Speaker Cable Length Differences: Do they matter?

gene

Audioholics Master Chief
Administrator


More often than not, I see the common question pop up in our forums regarding speaker cable length differences between two or more speakers.

Folks often wonder whether the cable lengths between the main front channels need to be identical, or close to identical.

They are often misinformed by exotic cable vendors or cable forum cult hobbyists that cable lengths need to be kept identical to avoid amplitude or phase/time delay differences between the two. Some even imply speaker cables exhibit transmission line behavior.

Does it really matter? Do these claims have any merit?

[Read the Speaker Cable Length Article]
 

Leprkon

Audioholic General
gene said:
Folks often wonder if the length between the main front channels need to be identical or close to identical.
so if I understand right, you are better off shortening one cable a couple of feet to avoid looping? :confused:
 
Francious70

Senior Audioholic
I didn't understand a word in that article. Are you basically saying that lengths under 500 ft really don't make a difference?? And to keep the wire lengths only as long as they need to be??

Paul
 
gene

Audioholics Master Chief
Administrator
Looping cable actually doesn't change the properties (namely inductance) significantly unless you have numerous loops tightly wound.

The point of the article was that cable lengths do not need to be kept identical so long as the lengths are reasonable to begin with. Obviously you don't want the left main speaker cable to be 10ft in length and the right one to be 500ft, but having one cable 2-3 times longer than the other makes little difference. If your cable runs become excessive (greater than 50'), it is recommended to use lower gauge cable (10AWG or less) to minimize losses.
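Gene's rule of thumb is easy to check numerically. Here is a rough sketch (assuming the usual copper-wire AWG resistance approximation and an idealized 8-ohm resistive load; real speakers are reactive, so treat the numbers as order-of-magnitude only):

```python
import math

def awg_ohms_per_kft(awg):
    """Approximate resistance of solid copper wire in ohms per 1000 ft.
    Rule of thumb: 10 AWG is ~1.0 ohm/kft and resistance doubles
    roughly every 3 gauge numbers."""
    return 1.0 * 10 ** ((awg - 10) / 10)

def insertion_loss_db(length_ft, awg, load_ohms=8.0):
    """Loss from cable series resistance into a purely resistive load.
    Round trip: two conductors, each length_ft long."""
    r_cable = 2 * length_ft * awg_ohms_per_kft(awg) / 1000
    return 20 * math.log10(load_ohms / (load_ohms + r_cable))

for length, awg in [(10, 16), (50, 16), (50, 10), (500, 16)]:
    print(f"{length:>4} ft of {awg} AWG: {insertion_loss_db(length, awg):6.2f} dB")
```

With these assumptions, 10 ft of 16 AWG loses under 0.1 dB while 500 ft of the same wire loses several dB, which is exactly the point: a 2-3x length difference is inaudible, but grossly excessive runs (or skinny wire on long runs) are not.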
 
Irvrobinson

Audioholic Spartan
I like the article, overall. Good stuff. Well reasoned.

Nonetheless, I think Patrick Hart's implication that 20KHz frequencies are relatively unimportant, because only a small percentage of the population can hear them and because they represent an upper harmonic of the highest key on a standard piano, is not as cogent as the rest of the article. First of all, I'm a 48 year-old male and I can easily hear 18KHz test tones, and so can other males and females I know of similarly advanced years. So the notion that only a small percentage of the population could hear 20KHz feels a bit strong, if you know what I mean. Secondly, some percussion instruments, like cymbals, have a lot of energy in frequencies much higher than 5KHz, and I find that response in the octave from 10KHz-20KHz is important to getting recorded cymbals to sound right. (I'm sensitive to cymbals, there are two drummers in my family.) I think it's also fairly safe to say that proper cymbal reproduction is important to a wide range of western music. Obviously, the importance of the top octave will vary by a person's hearing ability, but I can't believe I'm so out of the ordinary.

This doesn't mean that I disagree with the conclusion of the article at all. I've used widely different cable lengths (5ft versus 17ft) before in the left and right channels and there was never an audible difference. I think there is evidence to show that response in the highest octave is very important, it is just that different reasonable lengths of properly engineered speaker cables don't affect it much.
 
Clint DeBoer

Banned
Pat's point wasn't so much focusing on the fact that people couldn't hear those frequencies so much as it was outlining the increased SPL and room environments necessary to make their perception possible. Add to this the point of the article which dealt with cable length differences affecting sonics, and I think you'll see what we were getting at.

Taking a listening test and having an isolated 18kHz test tone blasted at you through earphones or speakers is a far cry from trying to discern a somewhat diminished ~18kHz signal behind a full frequency soundtrack.
 
Irvrobinson

Audioholic Spartan
Clint DeBoer said:
Pat's point wasn't so much focusing on the fact that people couldn't hear those frequencies so much as it was outlining the increased SPL and room environments necessary to make their perception possible. Add to this the point of the article which dealt with cable length differences affecting sonics, and I think you'll see what we were getting at.

Taking a listening test and having an isolated 18kHz test tone blasted at you through earphones or speakers is a far cry from trying to discern a somewhat diminished ~18kHz signal behind a full frequency soundtrack.
Actually, one of Pat's points was exactly that most people couldn't hear it. While I agree that reasonable differing cable lengths don't make an audible difference, I still think the line of reasoning that high frequency signals are relatively unimportant is basically flawed. I formed that opinion a while back from looking at the spectral analysis of certain recordings on a PC, and I was really surprised at the amount of energy in the highest octave, particularly from cymbals. Pianos do not have significant top octave energy, so that example is not especially relevant.

Furthermore, discussing whether or not one can discern 18KHz tones in a soundtrack is silly. To put the discussion on track, I think there are definitely audible differences between speakers, for example, that have top octave roll-offs that vary in the range of 15-20KHz. Since my own speakers have a supertweeter with a crossover at 12KHz (and it's a ribbon), the application of a little masking tape provides substantial insight into how much information is added in the top octave. It varies by recording, obviously, but unlike good subwoofers that supertweeter has non-trivial output on many music recordings.

I think it's important that objective observers subject their arguments to the same scrutiny as they would apply to subjective arguments. The conclusion of the article, that differing cable lengths are irrelevant, is sound. The argument that the differences due to cable lengths that can be measured are unimportant is also valid, but to extend the argument to say that because we can't hear high frequencies well anyway (at least compared to the middle octaves) these frequencies are unimportant, is poorly formed and not always the case. I don't think anyone could reasonably argue that 20KHz being down even 0.5dB is important, but that doesn't mean response reaching to 20KHz is unimportant. That's sort of like saying that you can't taste the difference between 100000 grains of salt and 113500 grains in a certain recipe so, therefore, the salt is unimportant.
 
jneutron

Senior Audioholic
Tis interesting that pat says this:
pat said:
A 5 microsecond delay has only been detected with any degree of certainty, under controlled laboratory conditions, within the approximate 3500Hz region where the ear is most sensitive.
Whereas, Nordmark tested and established the ability to discern 5 uSec left/right lateralization from about 500 Hz to in excess of 12 Khz. Jan wrote his paper back in 1976, published in JAS...I have it, should Pat wish to peruse it..so it would appear that Nordmark does not agree with Pat...hmmm..

5 uSec is the timeframe required to discern the location of a point source to within 2 to 3 inches, with the source 10 feet away.

Attached: first, a graph of Nordmark's up to 8 Khz data...I never scanned the 12 Khz graph, but I could... It shows the increased sensitivity of humans to lateral delays, when jitter is included within the sound..one reasonable source of jitter would be the cone displacement distance modulation at reasonable spl's..

Note that the lateralization sensitivity drops quickly with no jitter, climbing off the chart at about 1.2 Khz..this again, does not agree with Pat's statement..

Pat: Where are you getting this information? I would like to read the source papers...

Second attachment...an excel graph of lateralization vs the left/right delay..a sound source that moves 5 inches off midplane will shift the time delay 20 uSeconds..
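John's two figures can be sanity-checked with simple geometry. A minimal sketch (assuming a nominal 0.175 m ear spacing and 343 m/s speed of sound, treating each ear as a point receiver; the 20 uSecond figure appears to assume the 10 ft source distance from the earlier paragraph):

```python
import math

EAR_SPACING_M = 0.175   # assumed center-to-center ear spacing
C_SOUND = 343.0         # speed of sound in air, m/s

def itd_seconds(offset_m, distance_m):
    """Interaural time difference for a source offset_m off the midplane
    at distance_m straight ahead: exact path-length difference between
    the two ears divided by the speed of sound."""
    half = EAR_SPACING_M / 2
    d_near = math.hypot(offset_m - half, distance_m)
    d_far = math.hypot(offset_m + half, distance_m)
    return (d_far - d_near) / C_SOUND

ten_ft = 10 * 0.3048
print(f"5 in off midplane at 10 ft:   {itd_seconds(5 * 0.0254, ten_ft) * 1e6:.1f} us")
print(f"2.5 in off midplane at 10 ft: {itd_seconds(2.5 * 0.0254, ten_ft) * 1e6:.1f} us")
```

With these assumptions a 5 inch offset at 10 ft gives roughly 21 us, in line with the 20 uSeconds stated above; a 5 us delay corresponds to an offset on the order of an inch at that distance, the same order of magnitude as the 2 to 3 inches quoted earlier.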

Cheers, John

Note: yah, I know..I reversed the upload order..sorry bout that.
 


Richard Black

Audioholic Intern
Funny you should mention that 5us figure. I produced a 'test CD' (for want of a better term - have a look at the 'USHER disc' on www.musaeus.co.uk to see what it's about - yes, I know, shameless commercial plug, sorry) with one track consisting of castanet clicks recorded in mono but panned across from L to R in time increments of 5us keeping amplitude constant. On the whole, listeners seem to find that with practice they can hear the apparent position shift from one click to the next, or is it imagination? Whatever, I'd put a few us as the limit to time perception.

While the basic 'time of flight' delay in a cable is on the order of 5ns per metre, the group delay due to simple LCR filtering can be a lot higher. 10m of typical fig-8 speaker cable will have about 5uH inductance, which will give a time delay, with an 8ohm load, of around 0.6us across the audio band, if my calculations are correct. In other words, to get a time delay that looks even slightly alarming, with typical speaker cable, you'll need several tens of metres of difference between one channel and the other.
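Richard's estimate can be reproduced with the standard first-order model. A minimal sketch, assuming the cable behaves as a lumped series inductance into a purely resistive 8-ohm load (the ~0.5 uH/m figure-8 inductance is an assumption consistent with his 5 uH per 10 m):

```python
import math

def lr_group_delay_s(inductance_h, load_ohms, freq_hz):
    """Group delay of a series inductance into a resistive load
    (first-order low-pass): tau(w) = (L/R) / (1 + (w*L/R)^2)."""
    tau0 = inductance_h / load_ohms
    w = 2 * math.pi * freq_hz
    return tau0 / (1 + (w * tau0) ** 2)

# 10 m of typical figure-8 cable at an assumed ~0.5 uH/m
L_cable = 10 * 0.5e-6
print(f"low-frequency delay: {lr_group_delay_s(L_cable, 8.0, 100) * 1e6:.3f} us")
print(f"delay at 20 kHz:     {lr_group_delay_s(L_cable, 8.0, 20e3) * 1e6:.3f} us")
```

The delay comes out near 0.6 us and is essentially flat across the audio band, matching the figure in the post: tens of metres of length difference would be needed before the channel-to-channel skew approaches the few-microsecond perceptual threshold being discussed.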

Richard
 
jneutron

Senior Audioholic
Richard Black said:
Funny you should mention that 5us figure. I produced a 'test CD' (for want of a better term - have a look at the 'USHER disc' on www.musaeus.co.uk to see what it's about - yes, I know, shameless commercial plug, sorry) with one track consisting of castanet clicks recorded in mono but panned across from L to R in time increments of 5us keeping amplitude constant. On the whole, listeners seem to find that with practice they can hear the apparent position shift from one click to the next, or is it imagination? Whatever, I'd put a few us as the limit to time perception.
Nice experiment. I'm devising one with the exact same type of info, but I am taking it a little farther.

1. A motion control system with one speaker on a rail, moving side to side, behind an acoustically transparent curtain. A ruler is on the user side of the curtain, and is used to assign a number to the apparent location of the sound.. Using computer control, vary the location of the speaker at random, using the keyboard to input the perceived location of the source with respect to the ruler. This will establish the listener's ability to locate a simple point source. Care must be taken to control for learning..in other words, all the data must be time stamped to control for experience. It is possible that with time, one can sharpen one's ability to locate sounds.

2. Make multiple cd tracks, within each track repeats of the same delay. The listener has to assign a measurement of perceived location to each track, when played through two speakers this time, both behind the curtain. This can be randomized by using a computer to select the track played, and the keyboard can be used to input the perceived "location" of each randomly played track. Once a track has been "located", the computer selects at random, another track.

3. Using the measurement data, determine the accuracy and standard deviation of the listener's ability to locate the sound in space.

4. This setup must be done in two different ways: with the head locked in rotation, and with allowance for head movement. Each is different in significant ways.

5. Allowing each ear to receive both source waves is also significant. A method is being devised to test both...open air being the default, and an absorbent centerline barrier as the second, preventing ear crosstalk (left ear getting right info). This is significant in that using two sources to image one does not provide the same soundfield temporal stimulus that a point source provides...the fact that we image at all using two speakers is a wonder in itself.

6. Retest the same entities, but use ONLY amplitude variation, with delay set to zero. (this is the old "pan" method of image placement)

7. Develop a three dimensional surface with the axes being: amplitude, time delay, and a metric like "focus", this being how tightly and accurately one localizes the image. Perhaps using the standard deviation.

This procedure will establish the best use of delay and amplitude for placement of image..for a particular set of speakers, at a specific listening level, at specific angular placement. One does have to worry about the possibility that the listener may find the image varying in perceived distance, though..this confounding issue may have to be considered should it rear its ugly head..
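The randomized, timestamped trial loop with per-condition statistics that steps 1-4 describe can be sketched in a few lines. A minimal simulation (the function names and the Gaussian listener-error model are hypothetical, purely to show the bookkeeping of randomization, timestamping, and the bias/spread "focus" metrics):

```python
import random
import statistics
import time

def run_session(true_positions_cm, trials_per_position=20, listener_sigma_cm=3.0):
    """Simulate a randomized localization session.  Each trial presents
    a position chosen in shuffled order and records a timestamped
    response; the listener's report is modeled as truth plus Gaussian
    error (an assumption).  Returns per-position (bias, std deviation)."""
    responses = {p: [] for p in true_positions_cm}
    trials = [p for p in true_positions_cm for _ in range(trials_per_position)]
    random.shuffle(trials)                       # randomize presentation order
    for true_pos in trials:
        reported = random.gauss(true_pos, listener_sigma_cm)
        responses[true_pos].append((time.time(), reported))   # timestamped
    return {
        p: (statistics.mean(r for _, r in obs) - p,   # bias vs. the ruler
            statistics.stdev(r for _, r in obs))      # spread: the "focus" metric
        for p, obs in responses.items()
    }

results = run_session([-30, -10, 0, 10, 30])
for pos, (bias, sd) in sorted(results.items()):
    print(f"{pos:+4d} cm: bias {bias:+5.2f} cm, sd {sd:4.2f} cm")
```

The timestamps allow the learning-effect check mentioned in step 1 (plot error against trial time), and the per-position standard deviation is one candidate for the "focus" axis of the surface in step 7.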

What I would like from you, Richard, will be a test CD with these specified tracks..At this time, however, I am not ready to proceed. I am presently putting together 4 PC's for this endeavor. I have not considered the type of test signal that I'd want to use.

One twist I am interested in trying: I would love to establish a time vs alcohol consumption measure of my hearing capability..with one moving speaker, do a localization run...then, consume a martini (dry, with olives, straight up, Chopin vodka, of course)...then, do runs with control over time. Course, one possible confounding issue is whether I even want to run the experiment after a martini... :rolleyes:

richard said:
While the basic 'time of flight' delay in a cable is on the order of 5ns per metre, the group delay due to simple LCR filtering can be a lot higher. 10m of typical fig-8 speaker cable will have about 5uH inductance, which will give a time delay, with an 8ohm load, of around 0.6us across the audio band, if my calculations are correct. In other words, to get a time delay that looks even slightly alarming, with typical speaker cable, you'll need several tens of metres of difference between one channel and the other.
Richard
Several issues with the use of an 8 ohm resistive load.
1. Dynamic drivers are current based units..the force on the voicecoil is directly proportional to the current. Your model is voltage based, and carries no reactance.
2. Dynamic drivers' apparent mass is a summation of the actual mass of the moving unit, as well as an acceleration based higher mass as a result of the higher pressure air against the cone surface on one side, and a lower mass on the other, with the old PV=nRT non linear equation at work (to make it interesting).. This add-on mass is proportional to the current magnitude, is at twice the frequency of the drive signal, and is apparent only while the driver is accelerating.
3. Current driver models do not include temporal delays..they use simple lumped linear elements, with some non linearities tossed in...but never is a time delay based parameter included, just simplistic elements to try to get close. An example would be a bass reflex...its steady state response to a single freq is not the same as its transient response to the same freq. I fear that adding transmission line elements won't work because of the very slow speed of sound, millisecond level times do not model well with T lines..I don't think I can get the dielectric coefficients high enough to slow the t line down.
4. The amp will respond to the voltage at its terminals, not its current. How does the amp keep the driving force, which is current, consistent to within 5 uSec, at the load? There is a cable in the way.
5. How does the amp respond to returning energy from a reactive load, and does it control the current timewise accurately despite absorbing load energy? Dan is thinking about that one.
6. We are bandying 5 uSec numbers around, without the key, fundamental issue understood (well, ok, there's more, but I'm keeping this discussion simple)...What is it we are keying onto in our lateralization??? Is it absolute amplitude peak location? Is it Slew rate? Is it zero crossing? Nordmark conjectured that zero crossing was the factor..I take issue with this, as that implies that another sound on one channel can disturb the imaging mechanism. Imagine a sound from one side that has exactly 12 inches wavelength....this would have high pressure on one ear, and low on the other, confounding the zero crossing. Is it even the same key across the frequency band we are sensitive to??


Cheers, John

PS...hey will ya look at that!!!!!!! I suddenly became a "seasoned member"..will wonders never cease!!!.
 

plhart

Audioholic
"Nonetheless, I think Patrick Hart's implication that 20KHz frequencies are relatively unimportant, because only a small percentage of the population can hear them and because they represent an upper harmonic of the highest key on a standard piano, is not as cogent as the rest of the article."

I'm sorry if by my comments it was thought that I was implying frequency extension to 20KHz was not important. I was simply trying to present a viewpoint on wire from that of a speaker designer regarding wire's far superior frequency extension to the point wherein, given a decent cable, speaker designers usually don't put wire back into the design equation.

Audio reproduction is a chain wherein the weakest link is always the loudspeaker/room interface. In the audio analogy the weakest link can most often be equated to frequency bandwidth. A decent wire of the type tested and recommended by Gene will have, as Gene has repeatedly asserted, flat frequency extension past 80KHz at the very least. So unless there is something horribly amiss with the design of the cable it will have no audible effect within the frequencies which a speaker can operate. (Of course internal speaker wiring of adequate gauge and the impedance characteristics of crossover components are taken into account in the design of a passive crossover but that's not what we're talking about here.)

The next link in the chain is the amplification. Even very inexpensive modern receivers have high frequency extension typically measuring ~-3dB@50KHz. So we go to the next weakest link, the loudspeaker. (Yes, there can be a problem with low priced receivers driving low frequencies but here we're concentrating on the high frequency response.)

Next, to loudspeakers. (We'll leave the room out of this equation for the purposes of this discussion). Speaker designers will try to always design for high frequency extension past (at least) 20KHz. In audio, the human ear is quite sensitive to high Q resonances with sufficient bandwidth. That is, any deviation from a smooth transition of frequencies from one octave to another as we go up the frequency scale is noticeable. It stands out. Similarly, when we get up toward the end of the high frequency extension capabilities of a speaker system a sharp drop-off in frequency can also be noticeable. Note the hubbub caused by the first CD players in the early eighties. These first players had no oversampling but they did have the infamous CD "brickwall filter" at 22.05KHz. At that time I could hear to 18KHz flat but I was still "aware" of what this sharply truncated frequency response was doing to the overall sound. Note: this is comparing flat hearing response to a flat filter before roll-off. No boost.

Another problem has been in taming the oilcan resonance of metal dome tweeters. Early examples could have a fairly wide bandwidth +8dB peak at just past 20KHz and you damn well knew it was there even though technically you supposedly couldn't hear it. Again it is the case of how much deviation from flat before an out-of-hearing-range set of frequencies becomes "noticeable".

Given enough boost and a wide enough bandwidth, even outside the audio band a lot can be "heard" or "sensed". Over extended listening I've found loudspeakers which might exhibit these high Q, wide bandwidth anomalies, up to 25KHz to be quite audible, in the sense that they get annoying after extended listening.

Understand here that "hearing to 18KHz" means that you're hearing flat to 18KHz after which your hearing acuity drops off, usually at a sharp rate. It does not necessarily mean that you can't hear anything past 18KHz, it just means that, like a speaker at the end of its frequency extension, your hearing will be a given amount of dB down. If the downward slope is not too severe a high Q, wide bandwidth set of frequencies may be noticed. Some people like this effect, some don't.
 
Richard Black

Audioholic Intern
<<The amp will respond to the voltage at its terminals, not its current. How does the amp keep the driving force, which is current, consistent to within 5 uSec, at the load?>>

By Kirchhoff's Law!
 
jneutron

Senior Audioholic
I had hoped to come back this year to find a response to this assertion:
Pat said:
A 5 microsecond delay has only been detected with any degree of certainty, under controlled laboratory conditions, within the approximate 3500Hz region where the ear is most sensitive.
As this assertion contradicts the ability of humans to locate the direction a sound is coming from at the 25 milliradian level... and contradicts entirely the results I posted from Nordmark. Hence, my desire to be able to read the paper describing the "lab conditions" you were referring to.

It is quite disconcerting to see one who obviously understands the engineering aspects of audio making broad statements such as that, without an in depth analysis of the lateralization capabilities of humans. I consider your assertion as unsupported, and clearly incorrect. Of course, I'm also miffed that you chose to ignore my calling your statement to task... :rolleyes:
Pat said:
I was simply trying to present a viewpoint on wire from that of a speaker designer regarding wire's far superior frequency extension to the point wherein, given a decent cable, speaker designers usually don't put wire back into the design equation. ......
For most audio applications (all of mine included), there is absolutely no need to consider putting the wire back into the equation...once the resistance is known, and the amp is happy with the wire's lumped parameters, you're done.
Pat said:
So unless there is something horribly amiss with the design of the cable it will have no audible effect within the frequencies which a speaker can operate.
Again, for all the normal apps, that is entirely correct. However, you are only considering flat freq response, and gross phase shift of one channel at a time..for binaural reproduction, it is hugely erroneous.
Pat said:
Over extended listening I've found loudspeakers which might exhibit these high Q, wide bandwidth anomalies, up to 25KHz to be quite audible, in the sense that they get annoying after extended listening.
I have also "sensed" sounds at 25Khz. I use ultrasonic welders here to bond superinsulation layers, and when the welder is activated, all background noise diminishes..this being a response to the ear detecting the U/S energy and automatically ranging its "gain" down. It can be unsettling when that occurs, it's not unlike walking into an anechoic chamber.

It will take time...but eventually, the engineering community will begin to understand the nature of human lateralization capability, and the temporal accuracy required of the low impedance circuitry to maintain a virtual image that is constructed by a two channel system. Just look at the graph I provided....humans can detect 5uSec ear to ear delays at 500 hz...Show me even one example where anyone on the planet constructed an amplifier that was capable of 5uSec temporal accuracy at 500 hz..or even, tried to measure it..I am cognizant of how poorly everyone measures low impedance currents, enough to know that nobody has done it accurately..because they did not understand the requirements..

When lateralization is properly considered, you will begin to appreciate why I consider all poweramp construction I've seen to date to be, shall we say, "lacking?" It makes me chuckle to see how sloppily designed current state of the art electronics is when it comes to e/m theory and practice. It's all pretty much crap when it comes to maintaining temporal accuracy at the microsecond level...BUT NONE OF THE DESIGNERS UNDERSTAND THIS..that will change

And, one reason I do not worry about this "imaging thing" in my own listening... none of my listening will be done by sitting in one chair, located in the only sweetspot in the room. Anyone who chooses to listen in that fashion is welcome to their enjoyment.. my research will benefit them by actual engineering, by lowering the cost of their product, and by providing rules for design of audio product that addresses what they can hear..
I have no desire to flail away at random, trying this dielectric, or that sandbag, or any of that crapola being touted as "high end" garbage by self serving, arrogant gurus who demonstrate very little education..I also will not accept handwaving statements without basis in fact from well educated engineering types either.

My own testing and R&D will take in excess of 3 to 4 years, but I am a patient man..one with my own life and that of my loved ones as the priority. If I were interested in deriving income from this, it would be faster...

I post what I am doing, in the hopes that others will actually get off their chairs and start real analysis..that will hasten our understandings..and hopefully, lessen the unsupported statements from otherwise very intelligent people, and shut up the arrogant, un-knowledgeable ones...

Cheers, Happy New Year..

John
 

plhart

Audioholic
"Nordmark tested and established the ability to discern 5 uSec left/right lateralization from about 500 Hz to in excess of 12 Khz."

Last time I looked 3500 Hz was between 500Hz & 12KHz.


"5 uSec is the timeframe required to discern the location of a point source to within 2 to 3 inches, with the source 10 feet away."

Exactly my point of reference for the original statement. I usually remember arcane numbers such as this from the perspective of a listener. Within the Audioholics forums, we speak, as simply as possible, to other listeners.

"Jan wrote his paper back in 1976, published in JAS...I have it, should Pat wish to peruse it..so it would appear that Nordmark does not agree with Pat...hmmm.."

Why does it not agree? See the two answers above. I was answering with a generalization pulled from memory. So no I can't tell our readers the exact source. Nor do I wish to take the time to look it up.

Even when we write articles we almost never list sources. Audioholics is a website with an audience who generally are not in the industry themselves. Our surveys have shown us that our typical readers are most interested in what they should buy for their own home systems. So product reviews come first. Followed by articles which we feel are of the most importance to our readers in helping them put a product review (and how that product might sound in their home) in the most easily understandable perspective.

A good for instance is the Audyssey review I just did with Chris Kyriakakis and Tom Holman. Yes, Chris did supply me 11 highly technical articles which have been published in places like the JAES or IEEE on the various aspects of the Audyssey technologies. But we at Audioholics know our reader base and what interests them most is a) how well does the technology work, b) how much does it cost and c) when will it be available.

"It is quite disconcerting to see one who obviously understands the engineering aspects of audio making broad statements such as that, without an in depth analysis of the lateralization capabilities of humans. I consider your assertion as unsupported, and clearly incorrect. course, I'm also miffed that you chose to ignore my calling your statement to task... "

My own testing and R&D will take in excess of 3 to 4 years, but I am a patient man..one with his my own life and that of his loved ones as the priority. If I were interested in deriving income from this, it would be faster... "

So you're saying Audioholics should do an in depth analysis on the lateralization capabilities of humans because we are deriving income from running our site? Sorry, but I think the other 99.9% (beware! Unsubstantiated %!) of our readers would vote otherwise.

"It's all pretty much crap when it comes to maintaining temporal accuracy at the microsecond level...BUT NONE OF THE DESIGNERS UNDERSTAND THIS..that will change"

Please do let us know when your research is complete. We'd love to report on unearthing new answers to problems of the temporal accuracy of low impedance circuitry that has eluded the rest of the world's designers. Especially if those findings and their solutions elicit a profound benefit to the system's sound quality when played back in a typical home environment.
 

jneutron

Senior Audioholic
Where did my post go??? Did I lose it??

Guess I messed up...oh well...here it is again..

plhart said:
"Nordmark tested and established the ability to discern 5 uSec left/right lateralization from about 500 Hz to in excess of 12 Khz."
Last time I looked 3500 Hz was between 500Hz & 12KHz.
You stated:
A 5 microsecond delay has only been detected with any degree of certainty, under controlled laboratory conditions, within the approximate 3500Hz region where the ear is most sensitive.
There is a significant difference between "only detected at about 3500 hz in controlled laboratory conditions..." and humans' documented ability to discern lateralization over 500 to 12K..

"5 uSec is the timeframe required to discern the location of a point source to within 2 to 3 inches, with the source 10 feet away."
Exactly my point of reference for the original statement. I usually remember arcane numbers such as this from the perspective of a listener. Within the Audioholics forums, we speak, as simply as possible, to other listeners.
I didn't get that statement..your original statement that it has only been seen in the lab is not correct..
"Jan wrote his paper back in 1976, published in JAS...I have it, should Pat wish to peruse it..so it would appear that Nordmark does not agree with Pat...hmmm.."

Why does it not agree? See the two answers above. I was answering with a generalization pulled from memory. So no I can't tell our readers the exact source. Nor do I wish to take the time to look it up.
Hmmm, I wish you could remember the source, and whether it was an actual peer reviewed paper, or just some floob written by some marketing guy....oh well. If you remember, I'd like to know...(they say the memory is the second thing to go.. :eek: )

pat from article said:
The bottom line here is that the beginning statement, though it may be technically true, is ludicrous by nature of its construction which may and I fear does lead most of the hangers-on of such debates toward yet another thread of hopefulness in their trivial cable pursuits. We know what constitutes a "good" cable for any given length.
As I stated...for darn near all applications, you are correct, and I agree with you, as well as the bulk of the readers you are targeting.

Note that Gene stated phase shift at 20 Khz being about 5 uSec, and I concur that at 20Khz, we ain't gots the capability..nonetheless, we certainly have more capability in this regard than your "about 3500 hz" statement.
"It is quite disconcerting to see one who obviously understands the engineering aspects of audio making broad statements such as that, without an in depth analysis of the lateralization capabilities of humans. I consider your assertion as unsupported, and clearly incorrect. course, I'm also miffed that you chose to ignore my calling your statement to task... "

My own testing and R&D will take in excess of 3 to 4 years, but I am a patient man..one with his my own life and that of his loved ones as the priority. If I were interested in deriving income from this, it would be faster... "

So you're saying Audioholics should do an in depth analysis on the lateralization capabilities of humans because we are deriving income from running our site?
No, I'm saying that it will take me a significant amount of time to do these tests, because I do not derive income from it, so do not consider this testing to be a priority in my life.. The last I looked, the words "my own testing and R&D" meant....my own..
Sorry, but I think the other 99.9% (beware! Unsubstantiated %!) of our readers would vote otherwise.
I did not expect this kind of demeanor from a moderator.
"It's all pretty much crap when it comes to maintaining temporal accuracy at the microsecond level...BUT NONE OF THE DESIGNERS UNDERSTAND THIS..that will change"
Please do let us know when your research is complete. We'd love to report on unearthing new answers to problems of the temporal accuracy of low impedance circuitry that have eluded the rest of the world's designers. Especially if those findings and their solutions elicit a profound benefit to the system's sound quality when played back in a typical home environment.
As I have stated to Gene quite a few times, I will indeed present articles on this testing and my results as I proceed, but I do not wish to write one yet as I've not enough solid factual results to support a paper..

I'm still having trouble measuring the inductance of my newly built load resistor... my meters at work are unable to measure below a nanohenry. I have to use a new tack, that of pushing about 100 MHz into the darn thing and measuring the current lag. But I have to build an amp capable of driving that frequency into a four-ohm load.
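For readers following along, the arithmetic behind that measurement tack is straightforward: in a series R-L load the current lags the drive voltage by atan(ωL/R), which is vanishingly small at audio frequencies but opens up at RF. A minimal sketch (the 1 nH and 4 Ω figures echo the numbers in this thread; the rest is illustrative):

```python
import math

def phase_lag_deg(l_henries, r_ohms, f_hz):
    """Angle (degrees) by which current lags voltage in a series R-L load: atan(wL/R)."""
    omega = 2 * math.pi * f_hz
    return math.degrees(math.atan(omega * l_henries / r_ohms))

# A 1 nH parasitic inductance in a 4-ohm load is invisible at audio frequencies...
print(phase_lag_deg(1e-9, 4.0, 10e3))    # ~0.0009 degrees at 10 kHz
# ...but produces a measurable lag once you push ~100 MHz into it:
print(phase_lag_deg(1e-9, 4.0, 100e6))   # ~8.9 degrees
```

This is why nanohenry-level measurements force the test frequency so far above the audio band.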

Measurement of low impedance by "the rest of the world's designers" is quite interesting...most have no clue as to B dot error... most are unaware of external loop considerations...most hang their hat on either Caddock ceramics or Dale bifilar units...my goodness, pure garbage at high current slew rates..

If you wish more details on how badly the "world" measures high current slews, I would be more than happy to elaborate. It's actually very easy to understand, and it scales in frequency from the 5 to 10 tesla superconducting magnets I am involved with..we many times deal with slews of 1,000 A/s to 10 kA/s, and the trapped delta flux raises holy heck with microvolt measurements..

I make the assumption that your jab at me was simply one borne of not understanding what I am about..I base my understandings on science, not floobydust..

However, if it is Gene's wish to allow you to play your silly little word games as you just did with me, then I certainly will have to re-think future contributions to this site.

What I am testing will have little effect on home playback, as most everyone flat out refuses to sit in the very small sweet spot for the virtual image, but it will have a large impact on everyone's understanding of this back-and-forth fighting between believers and non-believers....they are different requirements..

What I do at work is well beyond your understanding..not your fault; it is, shall we say, esoteric. But I would like to apply what I know to audio..without being attacked by someone with a quick trigger finger..

Sometimes, people get in the way..
Cheers, John

I'll e-mail this response to Gene..so he has a heads up..
 
Last edited by a moderator:
mtrycrafts

mtrycrafts

Audioholic Slumlord
I am still trying to understand how you will get a 5us or more delay between the right channel and the left channel that will cause the soundstage to shift noticeably.
And, just because there is group delay in the cable itself between the lows and highs, how will that alone cause soundstage drift unless the cables on each side are so different as to cause that 5us-or-more difference for the same frequency between the two sides?
Then, will the 5us delay be as noticeable with music as with test tones?
Certainly JND studies show that music will mask level differences until a much higher difference exists than with test tones. Perhaps the same applies to soundstage issues?
How will one know if the location of the instrument is where it is supposed to be?
I am just trying to understand :)
 
J

jneutron

Senior Audioholic
mtrycrafts said:
I am just trying to understand :)
Mee toooo... :D
mtrycrafts said:
I am still trying to understand how you will get a 5us or more delay between the right channel and the left channel that will cause the soundstage to shift noticeably.
And, just because there is group delay in the cable itself between the lows and highs, how will that alone cause soundstage drift unless the cables on each side are so different as to cause that 5us-or-more difference for the same frequency between the two sides?
Then, will the 5us delay be as noticeable with music as with test tones?
Certainly JND studies show that music will mask level differences until a much higher difference exists than with test tones. Perhaps the same applies to soundstage issues?
How will one know if the location of the instrument is where it is supposed to be?
Hi Mtry..Happy New Year

All excellent questions..I'll give some answers and show some research holes..via modelling..bear with me for a while..I put it here rather than in a private e-mail, as Pat deserves an explanation; I was a little abrupt...

Assume an impulse source in space, 10 feet from the listener, producing a spherical pulse wavefront (a pulse for model simplicity; we can't actually localize a pulse in reality). It reaches each ear at a time based on distance.. We define the direction based on that time difference.....lateralization.
The sensitivity of discrimination will be highest directly in front, lowest at 90 degrees..a cosine based function.
By tests, it will be shown (the first segment of my work) that we can discern direction in a Gaussian-like manner...the better one is, the sharper the distribution (the lower the sigma), and the more accurately one can localize. If 50 uSec is the best one can do, then sigma (the uncertainty of direction) will be in the 15 to 30 inch wide range. By practice, I make the assertion (entirely unproven at this point) that one will be able to reduce sigma. Localization to a span of 5 inches will require about 5 uSec capability. Humans can discern timing changes down to about 1.5 uSec, but Nordmark's work does not demonstrate that the inverse, or directivity capability, can be extended down to half an inch or so at ten feet. I make the broad assumption that 2 to 3 inches is the best one could hope to do.
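For a rough sense of scale, here is a crude far-field sketch of the geometry (this is not John's model: the 0.18 m effective ear spacing and the d·sin(θ)/c approximation are my own assumptions) relating a lateral offset at 10 feet to an interaural time difference:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, room temperature
EAR_SPACING = 0.18      # m, assumed effective distance between the ears

def itd_seconds(offset_m, distance_m):
    """Interaural time difference for a source offset_m to one side at
    distance_m, using the simple far-field model dt = d * sin(theta) / c."""
    theta = math.atan2(offset_m, distance_m)
    return EAR_SPACING * math.sin(theta) / SPEED_OF_SOUND

# A 5-inch (0.127 m) offset at 10 feet (3.05 m):
print(itd_seconds(0.127, 3.05) * 1e6)  # ~22 microseconds in this crude model
```

The exact microseconds-per-inch figure depends heavily on the head model used, which is part of why the lateralization measurements described in this thread matter.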
Now....stereo reproduction...
Two point sources, 10 feet apart, with simultaneous impulses. Each ear receives its signal at the correct time....but then each ear also hears the other's signal. This is NOT how we were meant to localize a source...The brain "sees" the virtual image produced by each ear receiving its info, and then it "sees" the right and left images that are actually occurring as a result of two sources. The fact that we can image just the intended, center image, while not being annoyed by the side ones (I'll refer to those as sideband images), is a wondrous thing.
Since our localization sensitivity is Gaussian, the sweet spot will be a Gaussian-"blurred" point in space. It will be roughly diamond shaped, with a Gaussian profile along the two source vectors. It will also NOT be very sensitive to head rotation (the cos theta function), as we are worried about the relative image location, not the absolute one. However, it will be sensitive to translation from the geometric center of the spot. As one leaves the spot, the sensitivity to directionality defined by the binaural signals will diminish (the image will become less apparent). Once the head is far enough away from the sweet spot, all image information is gone. Once the image is gone, all that is left is the fidelity of the sources.

What has to be pointed out is that if one can "learn", by practice, to sharpen one's lateralization capability, one's sweet spot will become smaller (don't forget, the spot is defined by the Gaussian lateralization sensitivity). I surmise that Bose widened that sweet spot through the direct/reflecting technology...making the image larger when in the spot as an unintended side effect.

The standard arguments against cables being a concern center on the fidelity of the signal. Outside of the sweet spot, this is the only concern, and the lumped parameters of the wires, especially L and C, are unimportant. So, for the vast majority of people, who are not interested in sitting in a small area, the wires don't make a darn difference..For speakers that are not rigorously time coherent, it doesn't make as much of a difference..for a pair of old 901's, it also makes no difference. I consider it silly to drop lots of money into crazy cables of really low inductance and resistance if you are not concerned with lateralization...So for the bulk of the people, what is professed on this site is entirely accurate. It's the fidelity that matters.

Variation in timing of lateralization cues (yes, I finally got to your questions)?

First, what is the cue? Zero crossing, as Nordmark hypothesizes? Slew-rate based, as I hypothesize?...All that is known is that it is a 1 to 10 uSec thing..incredibly fast, considering we hear to about 20 kHz max..

If you look at the low impedance system as a whole in light of this speed level, many things become apparent..

1. Amps are not designed for those speeds..we are implying bandwidths of a megahertz.
2. Current slew rates in the ampere-per-microsecond regime create huge magnetic field rates of change, which will couple to loops in the locale, as well as resist the amp's efforts to control the output current.
3. Coupling of the feedback loop within the amp chassis to the output-current mag flux is unavoidable..in fact, it is not yet considered a part of good design. Why should it be? It has not been considered..
4. Measurement of these slew rates is not easy. Loop-coupled voltage intercept confounds measurement. Nobody I am aware of takes care to avoid that error term, either in measurement or in the physical layout of the amp.
5. Shielding is incapable of fixing the loop coupling..star grounding is useless also. The external ground loop, as in line cords, is also an issue.
6. Given Nordmark's data that lateralization is sharper with jitter, the amplitude of the system (spl, cone displacement) will also change the imaging.
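To put rough numbers on point 2, the error injected into a victim loop is just V = M·dI/dt. The mutual inductance and slew figures below are hypothetical, chosen only to show the order of magnitude involved:

```python
def induced_emf_volts(mutual_inductance_h, di_dt_a_per_s):
    """EMF induced in a victim loop by a changing current: V = M * dI/dt."""
    return mutual_inductance_h * di_dt_a_per_s

# Hypothetical numbers: 10 nH of stray coupling into a feedback path while
# the output current slews at 1 A/us (1e6 A/s):
print(induced_emf_volts(10e-9, 1e6))  # ~0.01 V, i.e. ~10 mV of injected error
```

Even a few nanohenries of stray coupling produce millivolt-level error terms at these slew rates, which is non-trivial next to a small feedback signal.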

With all these factors capable of affecting the overall output slew rate (note: not the absolute slew rate per se, but the error in the absolute slew rate), and hence image clarity, the question is now one of what magnitude is important and discernible..

I point out that the speed regime is almost three orders of magnitude faster than currently accepted engineering practices for audio reproduction. This is because the imaging requirements are there...

I point out that all the tests I have seen are incapable of accurate results at the 1 uSec level. I always require my test equipment to be at least an order of magnitude better than what I am trying to measure; this is why I am making my load resistor really non-reactive..I finally got a reading on it, but it is incorrect. At 100 kHz with 256-point integration, the meter says 2 nanohenries...that is incorrect, as the coaxial feed structure to it is 5 inches long, with an inner conductor OD of .650", an outer conductor ID of .835", and a dielectric constant of 1 (air)...and should be 7 nanohenries of inductance plus the resistor array inductance. Clearly, there is a problem..I think the meter is seeing the output wire pair.
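For what it's worth, that figure is consistent with the textbook external inductance of an air-dielectric coaxial structure, L = (μ0/2π)·ln(b/a) per unit length. A quick check, using only the dimensions quoted in the post:

```python
import math

MU0 = 4 * math.pi * 1e-7  # H/m, permeability of free space

def coax_inductance_h(inner_od_inches, outer_id_inches, length_inches):
    """External inductance of an air-dielectric coaxial line:
    L = (mu0 / 2pi) * ln(b/a) * length."""
    length_m = length_inches * 0.0254
    return MU0 / (2 * math.pi) * math.log(outer_id_inches / inner_od_inches) * length_m

# 5" feed, 0.650" inner conductor OD, 0.835" outer conductor ID:
print(coax_inductance_h(0.650, 0.835, 5.0) * 1e9)  # ~6.4 nH for the feed alone
```

So a meter reading of 2 nH for the whole structure is indeed physically implausible, as the post argues.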

Without the ability to see the delays using real electrical tests, one is stabbing in the dark...amp designers are forced to "listen with their ears", as they don't even understand the timing issues, and use random guesswork in their design..Wires, same thing...line cords, same...tweak, tweak, tweak...what a lousy way to work...

I refuse to tweak...

To answer your question....both amp channels are in the same chassis, both are trying to control a low impedance load that is miles away (the wire inductance is there)..both channels are, internally, broadcasting lots of high slew rate magnetic fields that the feedback loop is intercepting...and to top it off, we expect the thing to play well with a random signal.

It will be a long road, as I can't find any lateralization studies that I need to proceed, so I am relegated to designing my own...sigh..The e/m field theory stuff is the easy stuff, I'm sorry to say...I have lots of test experience in the 4 giga-amp per second regime and 1 ohm impedance, so these speeds won't be a problem for measurement. And, I've "dabbled" with higher current stuff.

All of what I speak of here, is testable...I will not handwave...results will be demonstrable, repeatable, and peer reviewed...nothing other than that will be acceptable..

Cheers, John.

To all the others: I apologize profusely for putting you to sleep..to the real geeks, I apologize for the oversimplifications I present..

Gene: I love this site; the editing capabilities and jpeg support are absolutely the best features here..thanks.
 
Last edited:
R

Richard Black

Audioholic Intern
<<To answer your question....both amp channels are in the same chassis, both are trying to control a low impedance load that is miles away (the wire inductance is there)..both channels are, internally, broadcasting lots of high slew rate magnetic fields that the feedback loop is intercepting...and to top it off, we expect the thing to play well with a random signal.>>

Yes, but you can still get total nonlinear distortions below -80dB, measured at the business end of the speaker cable with a dummy load, and the only reason the distortion goes up when you connect a real speaker is that the speaker itself generates distortion which is not necessarily damped that far down by the effective source impedance of amp plus cable. And linear distortions are small too, a few tenths of a dB in amplitude and a few us in time. So, er, what are we missing?

Richard
 
J

jneutron

Senior Audioholic
Richard Black said:
quoting me:To answer your question....both amp channels are in the same chassis, both are trying to control a low impedance load that is miles away (the wire inductance is there)..both channels are, internally, broadcasting lots of high slew rate magnetic fields that the feedback loop is intercepting...and to top it off, we expect the thing to play well with a random signal.

Yes, but you can still get total nonlinear distortions below -80dB, measured at the business end of the speaker cable with a dummy load, and the only reason the distortion goes up when you connect a real speaker is that the speaker itself generates distortion which is not necessarily damped that far down by the effective source impedance of amp plus cable. And linear distortions are small too, a few tenths of a dB in amplitude and a few us in time. So, er, what are we missing?

Richard
Ah, this is where we are on different wavelengths...

I am not talking about non-linear distortions at all..

I am talking about temporal distortions. Differences in arrival times based on the addition (or subtraction) of cos theta terms.

If you add 0.1 cos theta to sin theta, an FFT will find ZERO distortion, and the amplitude will not change appreciably. But yet, the waveform has changed significantly....with respect to lateralization..

It is entirely possible to screw up the audio signal so much that it is not possible to "visualize" an image in space, and yet still have zero distortion with respect to FFT analysis, or even homebrew software that uses sin(x)/x convolution.
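The sin/cos claim above is easy to verify numerically. A minimal sketch (stdlib only; the choice of a 1 kHz fundamental is mine, purely for illustration): adding 0.1·cos(θ) to sin(θ) raises the fundamental's magnitude by only sqrt(1.01) (about 0.04 dB) and creates no harmonics at all, yet it shifts the waveform in time by atan(0.1)/ω, which at 1 kHz is about 16 us.

```python
import cmath
import math

N = 64      # samples per record: one full cycle of the fundamental
F_BIN = 1   # the fundamental sits in DFT bin 1

clean = [math.sin(2 * math.pi * F_BIN * n / N) for n in range(N)]
tilted = [math.sin(2 * math.pi * F_BIN * n / N)
          + 0.1 * math.cos(2 * math.pi * F_BIN * n / N) for n in range(N)]

def dft_bin(x, k):
    """One bin of the discrete Fourier transform of sequence x."""
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x)) for n in range(len(x)))

mag_ratio = abs(dft_bin(tilted, F_BIN)) / abs(dft_bin(clean, F_BIN))
worst_harmonic = max(abs(dft_bin(tilted, k)) for k in range(2, 6))
phase_shift = cmath.phase(dft_bin(tilted, F_BIN)) - cmath.phase(dft_bin(clean, F_BIN))
time_shift_us = phase_shift / (2 * math.pi * 1000) * 1e6  # reading bin 1 as 1 kHz

print(mag_ratio)        # ~1.005, i.e. roughly +0.04 dB at the fundamental
print(worst_harmonic)   # numerical noise only: no distortion products
print(time_shift_us)    # ~15.9 us of pure time shift
```

In other words, a spectrum analyzer reports essentially nothing, while the arrival timing has moved by an amount well within the range discussed in this thread.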

In fact, I am unaware of any math package that can examine both channels simultaneously under full low impedance load, compare and time-correlate the two output channels with each other, and then compare that to the time correlation of the two inputs. Checking the inputs is necessary, as otherwise a package could not figure out how the inputs "wanted" the outputs to be, vs. how they actually are.

We have that math package embedded within our brain...but do not know how to apply the algorithms within test hardware.

The speaker, per se, is not introducing distortion...it is presenting a wildly reactive, linear impedance. How the amp responds to this is a major concern... especially how it responds on the other end of that inductor/capacitor thing (the cable). It is a trivial thing to measure the overall output voltage distortion at the amp terminals and the speaker terminals (of course, the force a voice coil exerts is current based, not exactly voltage based)...it is not trivial to measure the resultant air pressure distortion in front of the speakers (a 10 to 20 uSec variation in air pressure corresponds to a ridiculously small distance at the speed of sound, and at high spl it is far less than the cone excursion itself).

Of course, a speaker model does not include any sound reflection temporally, other than reactive terms measured using sine excitation.

Speaker nonlinearities are usually due to excessive "force"...if your excursions leave the gap, the surround goes large-signal, the cone acceleration is sufficient to cause air-compression nonlinearities, or the magnetic circuit is too compliant, allowing the audio to modulate the gap flux. I don't address that one yet..

So, no...I am not speaking of voltage distortions that can be measured by standard equipment; the DSP algorithms embedded within the machines are not sensitive to the temporal distortions that trash lateralization imaging.

Closing note: a power amp runs in four quadrants (except for class A)...pos and neg voltage, pos and neg current. For two of them the amp is "delivering" power; for two the amp is absorbing it. The ability of the amp to control the output is defined as its damping factor...higher is better. The damping factor is different for absorption and delivery..and the damping factor is actually a complex number, not the resistive one we use..
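A small sketch of that last point, with hypothetical numbers of my own choosing (amp output impedance, cable resistance and inductance): once the cable's jωL term starts to dominate the source impedance, the effective damping factor at the speaker terminals falls with frequency.

```python
import math

def damping_factor(r_amp_ohms, r_cable_ohms, l_cable_h, f_hz, z_load_ohms=8.0):
    """Effective damping factor at the speaker: nominal load impedance divided
    by the magnitude of the complex source impedance (R_amp + R_cable) + jwL."""
    omega = 2 * math.pi * f_hz
    z_source = math.hypot(r_amp_ohms + r_cable_ohms, omega * l_cable_h)
    return z_load_ohms / z_source

# Hypothetical: 0.01-ohm amp output impedance, a cable run adding 0.1 ohm and 3 uH:
print(damping_factor(0.01, 0.1, 3e-6, 100))    # ~73 at 100 Hz
print(damping_factor(0.01, 0.1, 3e-6, 20000))  # ~20 at 20 kHz
```

Taking the magnitude here is itself a simplification; the point of the post is precisely that the full complex quantity, not a single resistive number, governs control of the load.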

The feedback path of an amp is susceptible to B dot pickup within the amp. For supply rail currents, the feedback error will be polarity dependent...in other words, the error component caused by the positive rail current magnetic flux will be different from the error caused by the neg rail current flux. And the errors are cos derived.

When one considers lateralization, one realizes how poor standard amp layout practices really are. The state of the art in testing, as of now, does not consider it at all..

The first task at hand is to learn what lateralization is and the criteria for measuring it, and then to apply engineering understanding to correcting the sloppy practices...

Personally, none of this is of any significance to my listening pleasure. It is a voyage of discovery for me.. As for my personal sound systems, this site's recommendations are all I need.

Cheers, John
 
Last edited by a moderator:
Swerd

Swerd

Audioholic Spartan
jneutron said:
We have that math package embedded within our brain...but do not know how to apply the algorithms within test hardware.

The first task at hand is to learn what lateralization is, the criteria for measuring it, and then applying engineering understanding to correcting the sloppy practices...Cheers, John
I have been reading this thread with some interest, but not often understanding it. There are two aspects to this question about sound localization, the physics of sound and the biology of hearing. I am not at all familiar with this subject, so I did a quick literature search of what review articles have been recently published on the biological side of the question. Listed below are four articles with abstracts that may be useful for you. If nothing else, they might be useful as a source of basic older references on what info is known and generally accepted on this subject. Some of these papers are more concerned with how the brain processes the info, rather than what we can and cannot detect. After a quick read, I think the Bernstein article might be the most useful.

Email me if you want any of these full articles in pdf form.

Banks, M. S. (2004). "Neuroscience: what you see and hear is what you get." Curr Biol 14(6): R236-8.

The brain receives signals from a variety of sources; for example, visual and auditory signals can both indicate the direction of a stimulus, but with differing precision. A recent study has shed light on the way that the brain combines these signals to achieve the best estimate possible.


Bernstein, L. R. (2001). "Auditory processing of interaural timing information: new insights." J Neurosci Res 66(6): 1035-46.

Differences in the time-of-arrival of sounds at the two ears, or interaural temporal disparities (ITDs), constitute one of the major binaural cues that underlie our ability to localize sounds in space. In addition, ITDs contribute to our ability to detect and to discriminate sounds, such as speech, in noisy environments. For low-frequency signals, ITDs are conveyed primarily by "cycle-by-cycle" disparities present in the fine-structure of the waveform. For high-frequency signals, ITDs are conveyed by disparities within the time-varying amplitude, or envelope, of the waveform. The results of laboratory studies conducted over the past few decades indicate that ITDs within the envelopes of high-frequency stimuli are less potent than those within the fine-structure of low-frequency stimuli. This is true for both measures of sensitivity to changes in ITD and for measures of the extent of the perceived lateral displacement of sounds containing ITDs. Colburn and Esquissaud (1976) hypothesized that it is differences in the specific aspects of the waveform that are coded neurally within each monaural (single ear) channel that account for the greater potency of ITDs at low frequencies rather than any differences in the more central binaural mechanisms that serve these different frequency regions. In this review, the results of new studies are reported that employed special high-frequency "transposed" stimuli that were designed to provide the high-frequency channels of the binaural processor with envelope-based information that mimics waveform-based information normally available only in low-frequency channels. The results demonstrate that these high-frequency transposed stimuli (1) yield sensitivity to ITDs that approaches, or is equivalent to, that obtained with "conventional" low-frequency stimuli and (2) yield large extents of laterality that are similar to those measured with conventional low-frequency stimuli. 
These findings suggest that by providing the high-frequency channels of the binaural processor with information that mimics that normally available only at low frequencies, the potency of ITDs in the two frequency regions can be made to be similar, if not identical. These outcomes provide strong support for Colburn and Esquissaud's (1976) hypothesis. The use of high-frequency transposed stimuli, in both behavioral and physiological investigations offers the promise of new and important insights into the nature of binaural processing.


McAlpine, D. and B. Grothe (2003). "Sound localization and delay lines--do mammals fit the model?" Trends Neurosci 26(7): 347-50.

The current dominant model of binaural sound localization proposes that the lateral position of a sound source is determined by the position of maximal activation within an array of binaural coincidence-detector neurons that are tuned to different interaural time differences (ITDs). The tuning of a neuron for an ITD is determined by the difference in axonal conduction delay from each ear--the so-called "delay line" hypothesis. Although studies in birds appear to support this model, recent evidence from mammals suggests that the model does not provide accurate descriptions of how ITDs are encoded in the mammalian auditory brainstem or of how ITD-sensitive neurons contribute to mammalian sound localization.


Schnupp, J. (2001). "Of delays, coincidences and efficient coding for space in the auditory pathway." Trends Neurosci 24(12): 677-8.

To localize a sound source in space, the auditory system detects minute differences in the arrival time of a sound between the two ears. It has long been assumed that delay lines and coincidence detectors turn these time differences into a labelled line code for source position, but recent studies challenge this view.
 
