All codecs alter the signal and may introduce some loss depending on the algorithm used. But it is still a digital signal, so you either get what is distributed after the algorithm runs or you don't; there is no in-between. The kind of degradation that was tolerable with analog (showing up as snow) is not tolerable with digital, which is the core of my point: you either get what the company is sending you or you don't.
Also, depending on the processing power of the cable/sat box, there are many algorithms available for encoding the data. I used a simple algorithm in an image acquisition application (running at 160 Hz) and was able to losslessly reduce the data load to roughly a fifth of its original size in real time. I'm sure the cable/sat companies spend far more time than I did coming up with algorithms that creatively squeeze a lot of data into a small pipe.
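Something in the spirit of run-length encoding, for example, gets you that kind of ratio on images with large flat regions. The sketch below is just an illustration of that style of simple lossless algorithm, not the exact one I used:

```python
# Minimal run-length encoder/decoder sketch (illustrative only -- not
# the specific algorithm from the acquisition application).
# On frames with long runs of identical pixel values, RLE alone can
# reach ratios in the 5:1 range mentioned above.

def rle_encode(pixels):
    """Encode a flat list of pixel values as (value, run_length) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original pixel list."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# Example: a scan line that is mostly background compresses well.
line = [0] * 120 + [255] * 8 + [0] * 32
encoded = rle_encode(line)
assert rle_decode(encoded) == line                    # lossless round trip
print(len(line), "pixels ->", len(encoded), "runs")   # 160 pixels -> 3 runs
```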
Designing a codec is a series of trade-offs, with questions like how much loss is tolerable before it's noticed, and then how it can be reduced. For instance, the human eye can only distinguish 32 different colors from a palette at any given time; they would be foolish not to use a keyed lookup table based on that, and I'm sure they do. Do they bias the algorithm to compress temporally or spatially (I'm sure they use lossy compression)? I would rather have a slight delay than a blurry football. How is the sound encoded: do they send a video frame with the audio multiplexed in, or do they attach the audio in a header somewhere? As long as the box can decode what is put in, does it matter? Also, at this point doesn't it become a conversation about which algorithms you prefer? I also wonder how many of the codecs are shared by both cable and sat companies.
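To illustrate the keyed-lookup-table idea: quantize each frame to a small palette and send one index per pixel instead of a full RGB value. The sketch below shows only the general mechanics of indexed color; it is not a claim about what any particular cable or sat codec actually does.

```python
# Indexed-color sketch: build a small palette, then transmit one palette
# index per pixel instead of a 24-bit RGB triple. Picking the nearest
# palette entry is the lossy step.
from collections import Counter

def build_palette(frame, size=32):
    """Pick the `size` most common colors in the frame as the palette."""
    counts = Counter(frame)
    return [color for color, _ in counts.most_common(size)]

def nearest_index(color, palette):
    """Index of the palette entry closest to `color` (squared RGB distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(palette)), key=lambda i: dist(color, palette[i]))

def encode_frame(frame, palette):
    """Replace each RGB pixel with a single palette index."""
    return [nearest_index(px, palette) for px in frame]

# Example: a tiny 4-pixel "frame" of RGB tuples.
frame = [(10, 10, 10), (12, 11, 9), (200, 30, 30), (10, 10, 10)]
palette = build_palette(frame, size=32)
indices = encode_frame(frame, palette)
# Each pixel now costs one small index (5 bits covers 32 entries)
# instead of 24 bits of RGB.
print(palette, indices)
```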
Anthony