Well, I think you are missing something more basic. When the signal strength is low due to background noise, obstacles, distance, etc., the receiver will tell the transmitter to transmit at a slower speed. The major impediment to speed is usually signal attenuation, not protocol overhead or error correction/retransmission.
I think this is nit-picking semantics.
The statement I made does not assign any proportion to the various components of a signal; it simply attempts to list them. The error correction coding is a fixed ratio of any transmission or retransmission. Signal attenuation, in and of itself, leads to a higher error rate in transmission, and when significant errors are detected, retransmission is required. This is how signal attenuation affects user-perceived transmission speeds.
Transceivers that are designed to drop to a lower transmission rate do so because lower-bandwidth signals are typically more robust, meaning a higher percentage of the transmitted data gets through without significant errors that require retransmission. At a poor enough SNR, the throughput of the lower-bandwidth signal can exceed that of the higher-bandwidth signal, which is doing more retransmitting.
Good design practice suggests that the signaling method that yields the highest actual throughput should be selected, even if it has lower theoretical throughput. In the reference section, I included a link to a paper that examines selecting various modulations to maximize throughput for variable SNR.
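To make the trade-off concrete, here is a minimal sketch of that selection rule. The rate and frame-error-rate numbers are purely illustrative (not taken from the linked paper), and it assumes a simple retransmit-until-success model:

```python
# Sketch: pick the modulation whose *expected* throughput is highest
# at some fixed (poor) SNR. Rates and FERs below are made-up examples.

RATES = {            # raw rate (Mbit/s) -> frame error rate at this SNR
    54.0: 0.70,      # high-rate modulation: fast but fragile at low SNR
    24.0: 0.10,
     6.0: 0.01,      # low-rate modulation: slow but robust
}

def expected_throughput(raw_rate: float, fer: float) -> float:
    # With retransmission, each frame needs on average 1 / (1 - FER)
    # attempts, so the effective rate is raw_rate * (1 - FER).
    return raw_rate * (1.0 - fer)

best = max(RATES, key=lambda r: expected_throughput(r, RATES[r]))
print(best)  # 24.0 -- the mid-rate modulation wins despite lower raw rate
```

At this hypothetical SNR the 54 Mbit/s modulation only delivers 54 × 0.3 = 16.2 Mbit/s of actual throughput, so the 24 Mbit/s modulation (24 × 0.9 = 21.6 Mbit/s) is the correct choice, which is exactly the selection the paper formalizes.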
I think you misunderstand the overhead issue for Ethernet too. Signal attenuation is basically a non-issue for 10/100/1000BASE-T if you stay within the stated cable-length limits. Unless there is something seriously wrong with your Ethernet cable, error correction and Ethernet protocol framing are pretty insignificant overheads. You should be able to get 95% of stated bandwidth pretty easily.
If you are only getting 70% through 10/100/1000, then it is a combination of MTU size, IP overhead, TCP overhead, OS network stack performance, and application performance. It is really not too difficult to get 90% of ethernet line rates at the application level with proper software design.
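The framing arithmetic backs this up. A quick sketch of theoretical TCP goodput over Ethernet (ignoring ACK traffic, TCP/IP options, and stack/application overheads, which is where the rest of your 70% likely went):

```python
# Sketch: best-case TCP goodput fraction over Ethernet.
# Per-frame wire costs: preamble 8, header 14, FCS 4, inter-frame gap 12.
PREAMBLE, HEADER, FCS, IFG = 8, 14, 4, 12
IP_HDR, TCP_HDR = 20, 20          # assumes no IP/TCP options

def goodput_fraction(mtu: int = 1500) -> float:
    wire_bytes = PREAMBLE + HEADER + mtu + FCS + IFG
    payload = mtu - IP_HDR - TCP_HDR
    return payload / wire_bytes

print(f"{goodput_fraction(1500):.1%}")   # ~94.9% with standard frames
print(f"{goodput_fraction(9000):.1%}")   # ~99.1% with jumbo frames
```

So even with standard 1500-byte frames, Ethernet plus IP plus TCP headers only cost you about 5%; anything below that ceiling is software, not the wire.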
We can get about 90% of 10-GigE rates. At those higher speeds, having a fast enough bus on your motherboard and fast enough cpu and well designed application network code are far more significant factors. Ethernet overhead is basically a non-issue.
I think you are significantly overstating the ethernet overhead issue.
I never said that signal attenuation was a significant culprit in reduced Ethernet throughput, and the 70% value is likely conservative and not necessarily the value I would expect to get under all conditions; it was intended as a floor for performance.
The number is based on several sources, including a Dell white paper that quoted results from a 2003 Los Alamos test of a 10GbE connection. It is fairly likely that performance has improved over the intervening years, but my intent was to provide a conservative number for goodput that had published backing.
The 70% number also likely includes issues such as data collisions on congested networks under high utilization, and bottlenecking.
With prices for 10 GbE equipment at Newegg in the four-figure range, I imagine few consumers have 10 GbE equipment yet, and they likely do not have a well-designed professional network that avoids the issues that do slow throughput over Ethernet.
The intent is to make sure no one overestimates the performance they will see across a wide range of possible configurations.