Alternatives for AV Home Networking

admin

Audioholics Robot
Staff member
The ongoing convergence of AV and computing is inevitable, rooted in the dawn of digital media with the advent of the CD and nurtured by the Internet. Media servers, multimedia gaming consoles, HTPCs, networked AV receivers, MP3 player docks, IPTV: digital entertainment is becoming as at home on computers as it is on traditional AV gear. However, all of this cross-pollination between the two often leaves entertainment stored in disparate locations, so a reliable connection is required to transfer entertainment files between devices.

For those who find the shortcomings of Wi-Fi make it less than an ideal solution, there are other methods available that make use of existing home wiring and are not as onerous as pulling Ethernet cable through walls and attics. These methods also provide connections that are more secure, with better data throughput and reliability. There is a reason that mission-critical business systems and servers are primarily hardwired rather than connected by Wi-Fi.


Discuss "Alternatives for AV Home Networking" here. Read the article.
 
davidtwotrees

Audioholic General
Excellent, informative article! I'm not a geek and tend to muddle through all things technical, and I picked up on most of the article's points. I tried Wi-Fi in my concrete-shell apartment and it was terrible. I currently use a router with wired connections from my PC to my streaming Blu-ray player (Samsung 2550), and another to my media server (Escient FireBall SE80).
 
morrone

Enthusiast
Corrections

"For wireless transmission, the difference in maximum theoretical rate and actual rate are due to protocol overhead and error correction/retransmission. Anything that causes signal attenuation, distance and physical obstructions, will increase overhead for error correction, slowing useable data rates as a function of the signal to noise ratio with higher bit rate transmissions being more susceptible to noise."

Well, I think you are missing something more basic. When the signal strength is low due to background noise, obstacles, distance, etc., the receiver will tell the transmitter to transmit at a slower speed. The major impediment to speed is usually signal attenuation, not protocol overhead or error correction/retransmission.

I think you misunderstand the overhead issue for Ethernet too. Signal attenuation is basically a non-issue for 10/100/1000BASE-T if you stay within stated cable length limits. Unless there is something seriously wrong with your Ethernet cable, error correction and Ethernet protocol are pretty insignificant overheads. You should be able to get 95% of stated bandwidth pretty easily.

If you are only getting 70% through 10/100/1000BASE-T, then it is a combination of MTU size, IP overhead, TCP overhead, OS network stack performance, and application performance. It is really not too difficult to get 90% of Ethernet line rates at the application level with proper software design.
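The overheads listed here can be tallied directly. As a rough sketch, the following Python snippet adds up the standard per-packet byte counts for TCP over Ethernet (1500-byte MTU, no jumbo frames, no IP or TCP options) to show why roughly 95% of line rate is the practical ceiling for TCP goodput:

```python
# Back-of-the-envelope TCP-over-Ethernet goodput estimate.
# Byte counts are the standard sizes; real links may differ (TCP options,
# VLAN tags, etc.), so treat this as an upper bound, not a measurement.
MTU = 1500            # IP packet size limit for standard Ethernet
IP_HEADER = 20        # IPv4 header with no options
TCP_HEADER = 20       # TCP header with no options (timestamps would add 12)
ETH_FRAMING = 38      # 14 header + 4 FCS + 8 preamble + 12 inter-frame gap

payload = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of application data
on_wire = MTU + ETH_FRAMING              # 1538 bytes consumed on the wire
goodput_fraction = payload / on_wire

print(f"max TCP goodput: {goodput_fraction:.1%}")  # about 94.9%
```

Running the same arithmetic with 9000-byte jumbo frames pushes the ceiling above 99%, which is one reason high-throughput networks enable them.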

We can get about 90% of 10-GigE rates. At those higher speeds, having a fast enough bus on your motherboard, a fast enough CPU, and well-designed application network code are far more significant factors. Ethernet overhead is basically a non-issue.

I think you are significantly overstating the Ethernet overhead issue.
 
ivseenbetter

Senior Audioholic
This is a good article. It definitely raises some good points and brings these potential solutions to light for folks who may be interested. I had looked into this a few years back and the limitations were such that it didn't make sense to utilize it. However, with the info that is provided here, it sounds like things have changed. I'll definitely take another look at these solutions.
 
DavidW

Audioholics Contributing Writer
Well, I think you are missing something more basic. When the signal strength is low due to background noise, obstacles, distance, etc., the receiver will tell the transmitter to transmit at a slower speed. The major impediment to speed is usually signal attenuation, not protocol overhead or error correction/retransmission.
I think this is nit-picking semantics.

The statement I made does not assign any proportion to the various components of signal degradation; it simply attempts to list them. The error-correction coding adds a fixed proportion of overhead to any transmission or retransmission. Signal attenuation, in and of itself, leads to higher error rates in transmission, and when significant errors are detected, retransmission is required. This is how signal attenuation affects user-perceived transmission speeds.

Transceivers that are designed to drop to a lower transmission rate do so because lower-bandwidth signals are typically more robust, meaning a higher percentage of the transmitted data gets through without significant errors that require retransmission. At a poor enough SNR, the performance of the lower-bandwidth signal can exceed that of the higher-bandwidth signal, which is doing more retransmitting.

Good design practice suggests that the signaling method that yields the highest actual throughput should be selected, even if it has lower theoretical throughput. In the reference section, I included a link to a paper that examines selecting various modulations to maximize throughput for variable SNR.
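The selection rule described above amounts to maximizing expected goodput: nominal rate multiplied by the probability a packet gets through. A toy Python sketch makes the trade-off concrete; the rates loosely resemble 802.11g PHY rates, but the packet error rates are invented for illustration, not measured values:

```python
# Toy rate-adaptation model: at each SNR, pick the PHY rate with the highest
# expected goodput = nominal_rate * (1 - packet_error_rate).
# The PER table below is hypothetical; real radios derive it from BER curves.
RATES = {  # nominal Mbps -> {SNR in dB: packet error rate}
    54: {30: 0.01, 20: 0.70, 10: 0.99},
    24: {30: 0.00, 20: 0.05, 10: 0.85},
    6:  {30: 0.00, 20: 0.00, 10: 0.05},
}

def best_rate(snr_db):
    """Return the nominal rate with the highest expected goodput at this SNR."""
    return max(RATES, key=lambda r: r * (1.0 - RATES[r][snr_db]))

for snr in (30, 20, 10):
    r = best_rate(snr)
    goodput = r * (1.0 - RATES[r][snr])
    print(f"SNR {snr} dB -> {r} Mbps nominal, ~{goodput:.1f} Mbps expected")
```

At high SNR the fastest rate wins outright; as errors mount, a slower but more robust modulation delivers more useful data, which is the behavior described in the paragraphs above.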

I think you misunderstand the overhead issue for Ethernet too. Signal attenuation is basically a non-issue for 10/100/1000BASE-T if you stay within stated cable length limits. Unless there is something seriously wrong with your Ethernet cable, error correction and Ethernet protocol are pretty insignificant overheads. You should be able to get 95% of stated bandwidth pretty easily.

If you are only getting 70% through 10/100/1000BASE-T, then it is a combination of MTU size, IP overhead, TCP overhead, OS network stack performance, and application performance. It is really not too difficult to get 90% of Ethernet line rates at the application level with proper software design.

We can get about 90% of 10-GigE rates. At those higher speeds, having a fast enough bus on your motherboard, a fast enough CPU, and well-designed application network code are far more significant factors. Ethernet overhead is basically a non-issue.

I think you are significantly overstating the Ethernet overhead issue.
I never said that signal attenuation was a significant culprit in reduced Ethernet throughput, and the 70% value is likely conservative and not necessarily the value I would expect to get under all conditions; it was intended as a floor for performance.

The number is based on several sources including a Dell white paper that quoted results from a 2003 Los Alamos test of a 10GbE connection. It is fairly likely that performance has improved over the intervening years, but my intent was to provide a conservative number for goodput that had published backing.

The 70% number also likely includes issues such as data collisions on congested networks under high utilization and bottlenecking.

With prices for 10 GbE equipment at Newegg in the four-figure range, I imagine few consumers have 10 GbE equipment yet, and they likely do not have a well-designed professional network that avoids the issues that do slow throughput over Ethernet.

The intent is to make sure no one overestimates performance across the wide range of possible configurations.
 
