It may be easier to see with a simple example. One type of compression, called RLE or Run-Length Encoding, is probably the easiest to understand, though it's only useful for certain types of data. It works on values that sit next to each other: if the file has a long run of a given value, it reduces that run to a single copy of the value plus a count, so the decompression side knows how to rebuild the original file. So, for example, if you had:
a file containing:
HHHHHHHHHBBBBBBBBBCCCCCCCCCEEEEDDDDDD
it would be compressed to:
9H9B9C4E6D
As you can tell, that's very small compared to the original: it went from 37 bytes down to 10 bytes, a huge savings since the file is now less than 1/3 the original size. Now, let's say you compress it again, using the same general method of encoding. You'd end up with:
191H191B191C141E161D
I've now increased the file from 10 bytes to 20 bytes, because in the compressed file every run is only one character long, so each character now needs its own count of 1 in front of it.
As you can see, in this very simple and very impractical example, re-compressing a compressed file is inefficient and can cause you to end up with a file larger than the original.
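The whole example can be sketched in a few lines of code. This is a minimal illustration, not a real-world RLE implementation - it assumes every run is 9 characters or shorter so each count fits in a single digit, just like the example above:

```python
from itertools import groupby

def rle_encode(data: str) -> str:
    # Collapse each run of identical characters into "<count><char>".
    # Assumes runs of at most 9, so each count is a single digit.
    return "".join(f"{len(list(group))}{char}" for char, group in groupby(data))

original = "HHHHHHHHHBBBBBBBBBCCCCCCCCCEEEEDDDDDD"
once = rle_encode(original)   # "9H9B9C4E6D" - 37 bytes down to 10
twice = rle_encode(once)      # "191H191B191C141E161D" - back up to 20 bytes
print(len(original), len(once), len(twice))
```

Running it shows the sizes going 37, 10, 20: the first pass shrinks the file, and the second pass grows it again, because the compressed output has no runs left to exploit.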
As you can also see, an RLE compression or encoding really reduces the file size when the file has large groups of the same value next to each other.
One might ask why people distribute compressed files that don't really compress any further on the net. Well, for one reason, a lot of sites, email servers, etc. won't allow an executable to be sent, so putting it in a zip file fixes that issue. Another reason is when a group of compressed files makes up something: you might have another compressed file holding the whole group. It may seem like a waste of time, but it's easier for some places to do it that way than to re-pack a group of files.
So yes, ultimately, it's what you said: compressing a compressed file doesn't do anything, and can actually make matters worse. Compressing an uncompressed file, though, with an algorithm that fits the data, can greatly reduce the overall file size.