Cables Make a Difference

mtrycrafts

Seriously, I have no life.
jneutron said:
Hi mtry, how are you?

I need to embellish a tad.

There are three mechanisms of adaptation to aural stimulus..

1. Automatic gain ranging..the ability of our hearing mechanism to adjust to the level of the sound. One can easily imagine sitting in a quiet room, listening to soft music, hearing all the details...then, going outside and revving the Harley for five minutes...then, going back into the room, listening to the exact same music at the exact same level, but finding that it takes half an hour for your hearing to become sensitive again to that which you previously enjoyed.. This is also what happens with visual acuity vs light intensity..

2. Frequency based gain ranging..the annoying thing that happens when one sets an EQ's 15 kHz band just a bit higher to make the highs more brilliant...then, an hour later, it isn't as sizzling as it was when you first adjusted it..you have adjusted to the change in level a bit..

3. Localization stimulus interpretation. In the natural environment, we use timing delay and intensity levels to interpret where a sound source is in the space around us. When we are presented a horribly flawed soundfield, as stereo speakers by design project, the human brain is forced to adapt to that incorrect stimulation to "figure out" where the audible images are coming from. Modification of that flawed soundfield forces the brain to adjust its interpretation algorithms, to re-establish localization.

It is number 3 that I point out is not understood in the world of audio..

So far, I have seen very little attention paid to the complex relationship between those mechanisms and stereoscopic reproduction.

To all who read this..an experiment.

First, you need a soundcard that has only one D/A converter. The output section must mux the analog output. This forces the channels to have an 11 uSec delay..my soundcard has a built in 11 uSec delay on the left channel.

With headphones, listen to either a wave file or directly off a disk...(not an mp3 if you can avoid it, as losses can get in the way..)

Is the image centered??? Mine isn't. It is slightly to the right, an artifact of the 11 uSec delay..(yes, when I reverse the phones, the image does move to the other side..and yes, when I hot wire the channels together to force mono, it is in the middle regardless of headphone orientation.)

To center the image, I have to use the balance control. When I do this, the image indeed shifts to the center, but there is a problem...when I do this, the image goes all to "heck". It becomes less focussed...I use the mouse to adjust it...click on the control, turn your head away, sweep the control side to side, both extremes, then bring it back to center the image..surprise, it always requires the same level shift..and, all the frequencies DO NOT center at the same time..
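For anyone without a muxed-DAC soundcard, the interchannel delay described above can be approximated in software. A minimal sketch, assuming a 44.1 kHz rate and a 1 kHz test tone (both my choices, not from the post): it writes a stereo WAV whose left channel lags the right by 11 uSec, using a frequency-domain fractional delay since 11 uSec is only about half a sample.

```python
# Synthesize a stereo test tone whose left channel lags the right by 11 us,
# mimicking the muxed-DAC soundcard described above. Sample rate, tone
# frequency, and duration are assumptions for illustration.
import numpy as np
import wave

FS = 44_100           # sample rate, Hz (assumed)
DELAY_S = 11e-6       # interchannel delay: ~0.49 samples at 44.1 kHz
DUR = 2.0             # seconds
F0 = 1_000            # test-tone frequency, Hz (assumed)

t = np.arange(int(FS * DUR)) / FS
right = 0.5 * np.sin(2 * np.pi * F0 * t)

# Fractional-sample delay in the frequency domain: multiplying the
# spectrum by exp(-j*2*pi*f*tau) shifts the signal later by tau seconds.
spectrum = np.fft.rfft(right)
freqs = np.fft.rfftfreq(len(right), d=1 / FS)
left = np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * DELAY_S), n=len(right))

stereo = np.empty((len(t), 2))
stereo[:, 0], stereo[:, 1] = left, right
pcm = (stereo * 32767).astype(np.int16)

with wave.open("itd_test.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(FS)
    w.writeframes(pcm.tobytes())
```

Played over headphones, this file should pull the image slightly off center, the same way the muxed converter does.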

This is because we do not have the same sensitivity to IID shifts across the audio band. In point of fact, both ITD and IID sensitivity vary across the entire range of human hearing, and they do not track..meaning, it is not possible to use IID to correct for ITD errors on more than one frequency band simultaneously.

Given that a 1 foot diameter source localization error band, center stage ten feet away...is bounded by IID levels of .06 dB, and ITD levels of about 5 uSec, do you really think anybody is looking with test instrumentation...for what we can hear???

From what I have read so far, from the experts..they do not have much more than a rudimentary understanding of localization issues.

Dunlavy: hmmm..

If I trick an individual into believing that he has seen a ghost...have I proven that ghosts do not exist???

I expect far more scientific work than that. All he has proven is that the "test instruments" can be fooled.

As for papers...hmm..several things are required..

1. A paper explaining the mathematics of localization of a sound source, with emphasis on differential ITD and IID as it applies to image reconstruction. This is consistent with radar and sonar theory. It will establish the boundary math for reconstruction, ie, it will establish the constraints necessary for spatial fidelity in reconstruction. For example, to establish a virtual image 10 feet away, on axis, to within one foot, interchannel ITD must be less than 5 uSec deviant, and IID less than .06 dB deviant. I could plant the 2 axis plots here, but I do not believe anyone will understand them. (my apologies if that is an incorrect statement, I am working to create 3-D graphs which will encompass all of it.)
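As a rough sanity check of the geometry (this is not John's boundary math, which he has not posted), a crude two-point-ear model can estimate the cue magnitudes involved: ITD from path-length difference, IID from spherical spreading only, ignoring head shadowing. The ear spacing and the model itself are my assumptions; for a half-foot offset at ten feet it lands in the tens-of-microseconds and hundredths-of-a-dB range, the same ballpark as the bounds quoted above.

```python
# Rough interaural-cue geometry sketch: ears as two points EAR_SPACING
# apart, source as a point at (lateral offset, frontal distance).
# ITD from path-length difference, IID from 1/r spreading only (no head
# shadowing), so these are ballpark magnitudes, not hearing thresholds.
import math

C = 343.0            # speed of sound, m/s
EAR_SPACING = 0.18   # assumed ear-to-ear distance, m

def interaural_cues(offset_m, distance_m):
    """Return (ITD in seconds, IID in dB) for a source at the given
    lateral offset and frontal distance from the head center."""
    half = EAR_SPACING / 2
    r_left = math.hypot(offset_m + half, distance_m)
    r_right = math.hypot(offset_m - half, distance_m)
    itd = (r_left - r_right) / C             # arrival-time difference, s
    iid = 20 * math.log10(r_left / r_right)  # level advantage of the nearer ear, dB
    return itd, iid

FT = 0.3048  # meters per foot
itd, iid = interaural_cues(0.5 * FT, 10 * FT)  # half-foot offset, ten feet away
print(f"ITD = {itd * 1e6:.1f} us, IID = {iid:.3f} dB")
```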

2. A paper establishing the susceptibility of present electronics to these levels of deviations..this paper has to establish the test methods required to spot these susceptibilities as well as the equipment required, including the equipment specifications. (note that current state of the art load resistors are incapable of presenting the load parameters necessary to do this). I've explained some of these susceptibilities and test methods on forums, and with various individuals over the years...nothing has been done..

3. A paper which properly tests human hearing capability with respect to well defined differential ITD and IID stimulus, over the entire range of human hearing...until these parameters are controlled up to and including the actual cones, the testing done will be meaningless.. garbage in, garbage out.

All of this, is of course, unnecessary for the vast bulk of the human race, as there is a very small percentage of humans who even care about the coherent soundstage image to that degree.. I, for example, just enjoy the artist, or the movie, without worrying about confining myself to one point in space..others choose to do so, and that is just fine..

Cheers, John

PS..as you can see, the final proof of my assertions takes a long path, requiring advancing human audibility understandings and electronic test methods..I always welcome the "show me proof" stance you take..this is how it should be..
Hey, I am doing well enough. Good to see you back here too, not just battling JC. That is a lost cause. It was years ago when he first appeared on the scene :D

Thanks for the added explanations.

Yes, sitting in a quiet room with quiet music, then exposure to ear splitting noise, will cause a time lag until you readjust to that quiet room. How long is that? What happens with dynamic music? What happens when you try to differentiate the same music reproduced through two components of whatever, amp, wire, preamp, etc, to find audible differences? Then you also introduce acoustic memory to recall what you heard 10 seconds ago?
I am very familiar with the light aspect and night vision. Same applies there too though.

As to the IID and ITD, I think you are after the fundamentals and JNDs due to this, right? But, in my limited knowledge base and my hard head, I don't yet see that this must be conquered before you can try to differentiate audible differences between any components, right? Dunlavy just showed how easy it is to fool people, or, when making changes, no positive outcome was made. Whether he or the listeners understand the two mechanisms of localization is irrelevant. If the stimulus had differences large enough to be audible, that would have been discovered by the listening test? But I am still in my infancy on this :D And, I am indeed that 'show me' kind of guy. But, I don't matter in the scheme of the audio world. Just an insignificant ripple :D

How is your experimentation going? Any closer to the end of the tunnel? ;)
 
mtrycrafts

Seriously, I have no life.
pikers said:
mtrycrafts said:
I established it by preferring one over the other. If there wasn't a difference, there would be no preference...

I see, you did, after all, establish a difference, then you made a preference pick? Is this what I need to understand?

If so, your protocol is so flawed and unreliable that you better just make a preference based choice and leave the differences out. You have not established that there are any. But, you are happy, that is important. Truth, facts, reality, all irrelevant.

Thanks for your explanation as I am pretty sure I follow you now.
 
pikers

Audioholic
mtrycrafts said:
pikers said:
I see, you did, after all, establish a difference, then you made a preference pick? Is this what I need to understand?

If so, your protocol is so flawed and unreliable that you better just make a preference based choice and leave the differences out. You have not established that there are any. But, you are happy, that is important. Truth, facts, reality, all irrelevant.

Thanks for your explanation as I am pretty sure I follow you now.
Yes. I heard a difference, and established a preference based on the difference. In other words, I chose the ones that weren't harsh. If that's flawed (keep in mind I live in a house, not a science lab :rolleyes: ), then I guess that's what it is.
 
mtrycrafts

Seriously, I have no life.
pikers said:
mtrycrafts said:
Yes. I heard a difference, and established a preference based on the difference. In other words, I chose the ones that weren't harsh. If that's flawed (keep in mind I live in a house, not a science lab :rolleyes: ), then I guess that's what it is.

You don't need a science lab. And your flawed listening will result in unreliable choices. You are biased and picked one based on something other than sonic issues, which is fine, but you made unsubstantiated claims too. Rather simple.
 
pikers

Audioholic
mtrycrafts said:
pikers said:
You don't need a science lab. And your flawed listening will result in unreliable choices. You are biased and picked one based on something other than sonic issues, which is fine, but you made unsubstantiated claims too. Rather simple.
Incorrect. You have an issue with marketing strategies, but that's another thread.

Bias to me is what performs better. For you to assert otherwise is typical of your average scientist; stick to your theory until facts come wheeling in and run you over.

BTW - It's flawed if it doesn't work. Since it does, it's only flawed since it doesn't meet with your definition and desire to keep your head in the sand.
 
mtrycrafts

Seriously, I have no life.
pikers said:
mtrycrafts said:
Incorrect. You have an issue with marketing strategies, but that's another thread.

Bias to me is what performs better. For you to assert otherwise is typical of your average scientist; stick to your theory until facts come wheeling in and run you over.

BTW - It's flawed if it doesn't work. Since it does, it's only flawed since it doesn't meet with your definition and desire to keep your head in the sand.

Bias is just that, nothing more. If it affects decision making, it is a flaw. You seem to have plenty. Trying to educate you is a waste of time. Your mind is closed to reality. So, enjoy your imagined world.
 
jneutron

Senior Audioholic
mtrycrafts said:
Yes, sitting in a quiet room with quiet music, then exposure to ear splitting noise, will cause a time lag until you readjust to that quiet room. How long is that? What happens with dynamic music? What happens when you try to differentiate the same music reproduced through two components of whatever, amp, wire, preamp, etc, to find audible differences? Then you also introduce acoustic memory to recall what you heard 10 seconds ago?
I am very familiar with the light aspect and night vision. Same applies there too though.
My example of light was to show how the human systems adapt to the varying stimulus..for night visual acuity, it can be hours to recover to full night capabilities.

The standard JND and DBT tests do not allow for that recovery period when it comes to localization visualization..remember, we are trying to fool the brain into believing a sound source is where none exists...that is done with artificial signals.

When the relationship between those artificial signals changes, the brain must adapt to a new interpretation of the signals..that takes time. If it takes more time than is allowed for the subject to adapt, then the test is useless.

What is known about localization capabilities and adaptations is very little.
mtrycrafts said:
As to the IId and ITD, I think you are after the fundamentals and JNDs due to this, right? But, in my limited knowledge base and my hard head, I don't yet see that this must be conquered before you can try to differentiate audible differences between any components, right?
If the differences can be adapted to quickly by the listener, we concur..

If the differences between components are ONLY ITD and IID variations, the entire test protocol system used for the last coupla decades is worthless. Actually, worse than that, as they produce nothing but nulls.
mtrycrafts said:
Dunlavy just showed how easy it is to fool people, or, when making changes, no positive outcome was made.
In its own right, that is valuable..it is certainly necessary to establish the limits of the measurement equipment's reliability..he shows that the equipment can be quite unreliable..
mtrycrafts said:
Whether he or the listeners understand the two mechanisms of localization is irrelevant. If the stimulus had differences large enough to be audible, that would have been discovered by the listening test?
No. That is incorrect.

To be able to accurately measure an entity, one must first be aware that the entity exists..understand how the measurement equipment responds to the stimulus, how to present ONLY the stimulus for evaluation while leaving out confounding variables.

He, and his subjects, knew nothing about ITD/IID and localization. They do not know how to measure it...they do not know what levels make a difference, they do not know how to present either independently, or in conjunction, they do not know how to control it, and they do not understand what the time dependent sensitivity of the measuring equipment is..

Quite honestly, they know absolutely nothing about what they are trying to look for...did you expect them to succeed?? I didn't..

Consider my headphone experiment from coupla posts ago..what I described is beyond the state of the art in human localization research..it will be a while before what I am talking about is validated in the lab..even though it is so very easily repeatable and verifiable. Once I understood to look for a difference between ITD shifting and IID shifting based on frequency, it was easy enough to "see". One needs to know what to look for..


mtrycrafts said:
But I am still in my infancy on this

mee too... :eek:

mtrycrafts said:
But, I don't matter in the scheme of the audio world. Just an insignificant ripple
That's ok...I am pond scum.. :p
mtrycrafts said:
How is your experimentation going? Any closer to the end of the tunnel? ;)
As you can see..the closer I get, the faster the end moves away..

To get to the end requires advancing the discipline of human hearing research..I was sickened when I came to the realization that current research is not yet at the levels I have worked through. I need to complete my ITD/IID differential mathematical analysis, and then work with a neural/medical collaborator to test humans for inherent capabilities..This path forces at least two to three years into the works, as peer reviewed research is not fast....it should not be fast, as that could lead to articles like that '85 skin debacle that just surfaced at cables..

Cheers, John
 
jneutron

Senior Audioholic
pikers said:
Incorrect. You have an issue with marketing strategies, but that's another thread.
I believe the issue is not marketing strategies, but more so the fabrications used by marketing to sell product.

pikers said:
Bias to me is what performs better..
Bias is actually not defined as performing better. It is the tendency towards a specific outcome based on criteria which are not part of the outcome..like a prettier cable sounding better, or a more expensive one sounding better..this is what Dunlavy showed can skew an outcome.

pikers said:
For you to assert otherwise is typical of your average scientist; stick to your theory until facts come wheeling in and run you over.
No. Actually, an average scientist knows the difference between bias and outcome...'course, even a below average scientist knows the difference.

If you wish to talk of theory, just look at Hawksford's terribly flawed theory..or the sandbag on wires theory, motor-generator theory, piezo theory, skin effect smearing theory, strand jumping theory, grain boundary theory...these are theories produced by the marketing guys, the pseudo-engineers and gurus, and they have no merit. none.

pikers said:
BTW - It's flawed if it doesn't work. Since it does, it's only flawed since it doesn't meet with your definition and desire to keep your head in the sand.
Hmmm..I am working to introduce a new paradigm to the audio community, this paradigm could result in proof positive that something such as speaker wires and line cords CAN affect what is heard (and, being testable, allows for the possibility of proof otherwise)..this is in direct contention with what is understood to be the present paradigm, that we don't hear it, but are easily swayed by bias..so, in essence, I am applying more advanced science to prove exactly what you are saying..

Mtry is applying exactly what is known as of today...proof has not been demonstrated, bias can sway the outcome, and current understandings of human hearing and test measurements do not allow for what you say...Dunlavy et al. are applying what is known to the problem...no more, no less.. I will change what they know, it will take time, and it WILL be accomplished by scientists, the ones you disdain..

Hopefully, civil discourse will be met along the way..

Cheers, John
 
MacManNM

Banned
You guys are funny.

What if he’s right? What if he can hear a difference of 10 millivolts? What if we don’t fully understand the dynamics of human hearing? You all must admit that it is possible, although by current knowledge unlikely. The human senses are a wonderful and mysterious thing. When a person loses sight, does his hearing get better, or is his mind more focused? There are mysteries, which may never be solved; certainly I’m not sure that a few groups of people, tested in a controlled environment, can accurately represent the extremes of humans’ ability to hear.
 
mtrycrafts

Seriously, I have no life.
MacManNM said:
You guys are funny.

What if he’s right? What if he can hear a difference of 10 millivolts? What if we don’t fully understand the dynamics of human hearing? You all must admit that it is possible, although by current knowledge unlikely. The human senses are a wonderful and mysterious thing. When a person loses sight, does his hearing get better, or is his mind more focused? There are mysteries, which may never be solved; certainly I’m not sure that a few groups of people, tested in a controlled environment, can accurately represent the extremes of humans’ ability to hear.

His claims are easy to test today, even without understanding the dynamics of hearing. That is irrelevant, really. To date, no one has differentiated comparable cables, period. Why would he be a super human? He is an unknown entity.

If he is so certain, I am sure someone will test him. Will he admit defeat afterwards? Or just make excuses as so many before him have.
 
MacManNM

Banned
mtrycrafts said:
His claims are easy to test today, even without understanding the dynamics of hearing. That is irrelevant, really. To date, no one has differentiated comparable cables, period. Why would he be a super human? He is an unknown entity.

If he is so certain, I am sure someone will test him. Will he admit defeat afterwards? Or just make excuses as so many before him have.

Maybe he has super hearing. Super human? Maybe. Who are you to say he can't hear a difference?
 
mtrycrafts

Seriously, I have no life.
jneutron said:
Consider my headphone experiment from coupla posts ago..what I described is beyond the state of the art in human localization research..it will be a while before what I am talking about is validated in the lab..even though it is so very easily repeatable and verifiable. Once I understood to look for a difference between ITD shifting and IID shifting based on frequency, it was easy enough to "see". One needs to know what to look for..
Cheers, John

OK, you will be able to present a signal and alter the parameters of IID and ITD to be detectable, with headphones. Can this be part of the music stimulus? Will it work with speakers, without headphones? Do components display such magnitude of differences to be detectable?
 
Nick250

Audioholic Samurai
MacManNM said:
You guys are funny.

What if he’s right? What if he can hear a difference of 10 millivolts? QUOTE]

To me the question is, what if he hears a 10 millivolt difference when there isn't one? i.e., hearing something which is not there.
 
MDS

Audioholic Spartan
Nick250 said:
To me the question is, what if he hears a 10 millivolt difference when there isn't one? i.e., hearing something which is not there.
That is exactly the issue and why there must be a proper protocol to remove bias; ie DBT or A/B/X type testing. The brain and our senses are extremely complex and powerful...and yet easily fooled.
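To make the protocol point concrete, here is a minimal sketch of the statistics behind an ABX run: the chance that a pure guesser scores at least k correct out of n trials. The trial counts below are made-up examples, and the usual 0.05 significance criterion is a convention, not anything from this thread.

```python
# Binomial check of ABX results: with n trials and k correct, how
# likely is a k-or-better score from pure coin-flip guessing?
from math import comb

def abx_p_value(correct, trials):
    """P(at least `correct` hits in `trials` fair-coin guesses)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # 12/16 correct: unlikely by chance
print(abx_p_value(9, 16))   # 9/16 correct: close to guessing
```

This is why a handful of "I heard it" trials proves little: without enough trials and a bias-controlled presentation, near-chance scores are indistinguishable from guessing.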
 
jneutron

Senior Audioholic
mtrycrafts said:
OK, you will be able to present a signal and alter the parameters of IID and ITD to be detectable, with headphones. Can this be part of the music stimulus? Will it work with speakers, without headphones?
With headphones, it is easy to consider..this is because each ear receives exactly what is intended for that ear.

With a pair of speakers, the issue becomes more complex. This is because the left ear hears, delayed, what was intended for the right..and vice versa.

So, two speakers present three images. The right speaker creates its own image, the left its own, and the combo produces the center image. This is entirely different from how we receive an image from an actual object, where the spherical wavefront has directional vectors of propagation.

For a real object, we humans have learned how to tell where the object is..for a virtual one, we have to establish the "interpreter" internally to localize..the "interpreter" needs to ignore the side images..

mtrycrafts said:
Do components display such magnitude of differences to be detectable?
The big question..

To localize to within one foot, coherence needs to be accurate to 5 uSec and .06 dB interchannel.. so the real question is: who tests to that level??

Answer: nobody, they do not know how to, they do not realize the need to..

As a sidebar: accuracy to one foot requires those ITD and IID levels..but what needs to be done is present the stimulus accurately in a lab enviro, to determine the actual sensitivity vs frequency for those parameters. It is certainly not 5 uSec and .06 dB across the entire spectrum, nor will the two track each other..but that tracking is the most important thing of all..pan pots, btw, do not do that.
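To put those bounds in instrumentation terms, a quick conversion (the 44.1 kHz sample rate is my assumption; the 5 uSec and .06 dB figures are John's from above): 5 uSec is only about a fifth of a sample period, and .06 dB is a channel level match better than one percent.

```python
# Convert the 5 us / 0.06 dB interchannel bounds into engineering units,
# assuming CD-rate sampling. The bounds themselves come from the post above.
ITD_BOUND = 5e-6     # seconds
IID_BOUND = 0.06     # dB
FS = 44_100          # Hz, assumed sample rate

samples = ITD_BOUND * FS          # fraction of one sample period
ratio = 10 ** (IID_BOUND / 20)    # linear amplitude ratio for 0.06 dB

print(f"5 us = {samples:.3f} samples at {FS} Hz")
print(f"0.06 dB = amplitude ratio {ratio:.5f} ({(ratio - 1) * 100:.2f}% level difference)")
```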

Cheers, John
 
mtrycrafts

Seriously, I have no life.
jneutron said:
With headphones, it is easy to consider..this is because each ear receives exactly what is intended for that ear.

With a pair of speakers, the issue becomes more complex. This is because the left ear hears, delayed, what was intended for the right..and vice versa.

So, two speakers present three images. The right speaker creates its own image, the left its own, and the combo produces the center image. This is entirely different from how we receive an image from an actual object, where the spherical wavefront has directional vectors of propagation.

For a real object, we humans have learned how to tell where the object is..for a virtual one, we have to establish the "interpreter" internally to localize..the "interpreter" needs to ignore the side images..



The big question..

To localize to within one foot, coherence needs to be accurate to 5 uSec and .06 dB interchannel.. so the real question is: who tests to that level??

Answer: nobody, they do not know how to, they do not realize the need to..

As a sidebar: accuracy to one foot requires those ITD and IID levels..but what needs to be done is present the stimulus accurately in a lab enviro, to determine the actual sensitivity vs frequency for those parameters. It is certainly not 5 uSec and .06 dB across the entire spectrum, nor will the two track each other..but that tracking is the most important thing of all..pan pots, btw, do not do that.

Cheers, John
I think, and that is a biggie when I do that ;) , that with speakers this is an impossibility, as you are also talking about the space around you, which throws the biggest monkey wrench into one's perception. You move a speaker, it changes. Move the listener, it changes, add to the space, it changes.

I am still struggling with this ;) But, I am dense. Let me know when you have answers :D
 
jneutron

Senior Audioholic
mtrycrafts said:
with speakers, this is an impossibility as you are also talking about the space around you that throws the biggest monkey wrench into one's perception. You move a speaker, it changes. Move the listener, it changes, add to the space, it changes.
Yup..totally agree, the first thing is the speakers and the room..

BTW, direct-reflecting technology uses the room, time, and direction, to get around a lot of the problem..by diffusing the sources, we seem to hear a better soundstage..not as focussed, but quite a few people like the sound.

What I am working on is the next step...presentation of a CORRECT information stream to the speakers..

I cringe whenever I think of a pan pot..eventually, both ITD and IID will be available as tools for the recording people for mixdown..it will also require standardization of speaker placement..
mtrycrafts said:
I am still struggling with this ;)
Mee too...

mtrycrafts said:
But, I am dense.
Well, at least I'm not alone :p
mtrycrafts said:
Let me know when you have answers :D
I always try to keep everyone abreast..


Cheers, John
 