I do have a question and you are the guy to answer. I'll try my best to get this across so bear with me lol. Isn't any loss between an HDMI connection (or any digital connection) 'corrected' as part of the interface? So say a few 1's and 0's get lost or scrambled, the digital interface will correct it or make do? Yes, some data might be lost, but at a level so minimal that we couldn't tell the difference unless test equipment tells us?
Not really the case with HDMI. The AV stream is a one-way protocol (there is some two-way communication with the right gear at both ends, but it can't correct the AV transmission; it just handles handshaking, like setting the resolution). So it really is a matter of either the cable does the job, or the picture or audio starts to show signs of stress or fails completely.
Longer cables have a harder time keeping the error rate within spec. There are workarounds, but they mean added cost and perhaps data limits (so, maybe OK with 1080p but not 4K, for example). Those are mostly used by installers who plan the entire system / room.
1's and 0's are sent as discrete analog voltage levels. It's really important to remember that in all cases these are analog devices working with voltages that represent digital data. The CPU in the computer you are using to read this is an all-analog device. There are no real ones and zeros anywhere in a digital system; they are merely represented by analog signals.* And so on.
There is a bit of an incongruity between how digital theory is taught and how analog theory is taught. In many (probably most) cases, the student of digital processing is never taught any analog theory ... they are taught that a one is a one and a zero is a zero, and let's move on. The better way to do it would be to teach some analog theory to go with it, but I've spoken to many a digital processing professional who could not cite Ohm's Law. That is a serious error.
So (and this is just illustrative; the actual voltages differ depending on the digital protocol) it's not a matter of 0V = a digital zero and 5V = a digital one; it's more like 2V is the zero and 4V is the one, with a "zero crossing point" (the decision threshold) of 3V.
If there is sufficient interference or some other issue, the trouble usually happens around that zero crossing point. At what voltage above 3V does the interface decide it's dealing with a digital one? What if the signal transitions slowly or arrives weak, and a zero never registers as a one because the voltage never gets clearly above the threshold? As the resolution of your system goes up, the only way to force more digital ones and zeros through the cable is to increase the frequency of the changes, and that requires ever more capable high-frequency analog devices and cable transmission capabilities.
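To make that concrete, here's a minimal sketch in Python. This is not any real HDMI implementation; the 2V/4V levels and 3V threshold are just the illustrative numbers from above, and the Gaussian noise is a stand-in for whatever a long or poor cable does to the signal. It shows how a receiver that decides bits by threshold starts misreading them as the noise grows:

```python
# Minimal sketch (not a real HDMI implementation): a receiver that
# decides each bit by comparing a noisy analog voltage to a threshold.
# The 2 V / 4 V levels and 3 V threshold are the illustrative values
# from the text above, not actual HDMI voltages.
import random

LOW, HIGH, THRESHOLD = 2.0, 4.0, 3.0  # volts (illustrative only)

def transmit(bits, noise_sigma):
    """Map bits to voltages, then add Gaussian noise to mimic a lossy cable."""
    return [(HIGH if b else LOW) + random.gauss(0, noise_sigma) for b in bits]

def receive(voltages):
    """Decide each bit: anything above the threshold is read as a one."""
    return [1 if v > THRESHOLD else 0 for v in voltages]

random.seed(42)
bits = [random.randint(0, 1) for _ in range(100_000)]

for sigma in (0.2, 0.5, 1.0):  # more noise ~ longer / worse cable
    received = receive(transmit(bits, sigma))
    errors = sum(a != b for a, b in zip(bits, received))
    print(f"noise sigma {sigma:.1f} V -> bit error rate {errors / len(bits):.4%}")
```

With low noise the error rate is effectively zero; crank the noise up and more samples land on the wrong side of the threshold, and the error rate climbs fast. That's the "signs of stress" I mentioned above.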
That kind of thing is where the problems live.
I've tried to make this easy to understand; you don't need to be an engineer (electrical or digital) to grasp the concepts. The actual workings of all this are quite complex, but the principles are simple.
It might also be helpful to note that these protocols are always making mistakes. Most digital systems have robust error correction routines, which generally keep the error rate in check and do an excellent job of it.** So the place where errors slip through is that analog cable interface, because there they cannot always be corrected ... the bits are assumed to be correct, rightly or wrongly.
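For a toy illustration of what error correction buys you, here's a 3x repetition code with a majority vote. This is far cruder than any real scheme (it's not what HDMI or CDs actually use), but the principle of recovering from flipped bits via redundancy is the same:

```python
# Toy error correction: send every bit three times, take a majority
# vote on the receiving end. Real systems use far more efficient codes,
# but the redundancy principle is the same.
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]  # each bit sent 3x

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

data = [1, 0, 1, 1, 0]
coded = encode(data)
coded[4] ^= 1                 # flip one transmitted bit (a single-bit error)
assert decode(coded) == data  # the error is corrected transparently
print(decode(coded))          # [1, 0, 1, 1, 0]
```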
* For the sake of correctness, digital ones and zeros do exist, sort of, but in storage media, like the pits and lands of an optical disc. You can use almost any medium to store digital files ... vinyl LP records can store digital data, for example, and were briefly used that way. Some people might be more familiar with the analog tape (cassettes) used by early home computers. Same thing. Note that these media are all analog as well, but the data can be very discretely represented, which is a kind of one-and-zero world.
Once that data needs to come off the storage media and into a system, it's all analog electrical signals (versus analog storage in a non-electronic form) from that point.
** For example, a Red Book compliant CD doesn't just write the music data once: the data is interleaved across the disc along with error-correction codes (Reed-Solomon coding). When an error is discovered (the player can't read through a scratch, for example), it can reconstruct the correct data from that redundant information and send it on to the rest of the system. This takes time, but there is enough time to do it as long as the damage isn't too extensive.
Now, that's at 44.1kHz. What if your data is at 88.2kHz (a hi-res audio file)? Now the time available to make that correction is halved. That is an example of how higher resolutions (like 4K) can introduce problems, or to put it another way, how a cable that was fine with 1080p can fail with 4K.
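As a quick sanity check on that timing claim:

```python
# Back-of-the-envelope check: the time available per sample halves
# when the sample rate doubles.
for rate_hz in (44_100, 88_200):
    print(f"{rate_hz} Hz -> {1 / rate_hz * 1e6:.1f} microseconds per sample")
# 44100 Hz -> 22.7 microseconds per sample
# 88200 Hz -> 11.3 microseconds per sample
```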