Objective testing of speaker wire? Does it exist?

Will Brink
Audioholic
Not a thread to start a debate, but to discuss objective testing, or rather the lack thereof. Being Audioholics, I suspect few here are under the impression that expensive wire/cables/power cords have any effect on the sound, but I digress:

If you were to run a proper objective blind A/B listening test in which statistical significance could be detected and measured, how would you set that up? Has no one developed an accepted protocol? Testing this objectively is clearly not as simple or easy as it seems on the surface, and I'm amazed that decades of this discussion have not produced a protocol. Maybe we can develop such a protocol? There have been some commendable attempts, but none seems to have been run under controlled conditions. I realize that's easier said than done. To the best of my knowledge it has not been done, which surprises me. What would the protocol be to test for statistical significance?

A study starts with a basic hypothesis:

Hypothesis: human beings will be unable to distinguish between speaker wires if they cannot see which wire is being used under controlled conditions. We aim to test that hypothesis by controlling the known confounding variables and removing bias via blinded, randomized A/B testing. Our hypothesis is that volunteers under such controlled conditions will be unable to differentiate between wires at a rate greater than random chance dictates.
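For concreteness, here's a minimal sketch in Python of how the scoring core of such a test might work, assuming an ABX-style protocol where each trial's cable is chosen at random and the listener's only job is to identify it. All names and numbers here are illustrative, not an established protocol:

```python
# Minimal sketch: randomized blinded trials scored with an exact binomial
# test. The null hypothesis is pure guessing (p = 0.5); a small p-value
# means the listener identified cables at a rate beyond chance.
import random
from math import comb

def presentation_order(n_trials: int) -> list[str]:
    """Blinded, randomized cable assignment for each trial."""
    return [random.choice(["A", "B"]) for _ in range(n_trials)]

def binomial_p_value(hits: int, n_trials: int, chance: float = 0.5) -> float:
    """One-sided exact test: P(X >= hits) under pure guessing."""
    return sum(comb(n_trials, k) * chance**k * (1 - chance)**(n_trials - k)
               for k in range(hits, n_trials + 1))

# Example: 14 correct out of 20 trials gives p ~= 0.058, which just
# misses the conventional 0.05 threshold -- not enough to reject guessing.
print(binomial_p_value(14, 20))
```

The exact test matters here because a single listener can only sit through a few dozen trials before fatigue sets in, and normal approximations are unreliable at those counts.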

I thought this was a fun recent attempt, though by their own admission not conclusive or scientifically valid:

 
Swerd
Audioholic Warlord
It is widely agreed that different speakers do sound different. Even the same speakers will sound different if moved to different locations within a room.

Although it's widely debated whether different amplifiers impart different sounds to the same speakers, it is commonly accepted around here that different solid state amplifiers, if they are well-designed, operate well below clipping, do not misbehave, and have a high damping factor across the band, will tend to sound the same. There are a few who claim otherwise, but they never discuss randomized blind listening tests to settle the question.

No one here debates whether speaker cables can impart different sounds. It would be like wondering whether you can hear paint dry, and if different brands of paint sound different. We don't go there… and you shouldn't either.
 
Kvn_Walker
Audioholic Field Marshall
You test speaker cables objectively with a multimeter. Any method that involves your ears is subjective.
 
Will Brink
Audioholic
It is widely agreed that different speakers do sound different. Even the same speakers will sound different if moved to different locations within a room.

Although it's widely debated whether different amplifiers impart different sounds to the same speakers, it is commonly accepted around here that different solid state amplifiers, if they are well-designed, operate well below clipping, do not misbehave, and have a high damping factor across the band, will tend to sound the same. There are a few who claim otherwise, but they never discuss randomized blind listening tests to settle the question.

No one here debates whether speaker cables can impart different sounds. It would be like wondering whether you can hear paint dry, and if different brands of paint sound different. We don't go there… and you shouldn't either.
They may not here, but they do endlessly elsewhere, and I'd like to actually develop a protocol to test it, to see if I/we can get them to STFU about it some day. I also simply find the intellectual/scientific discussion of how/if it can indeed be tested interesting, per my OP.
 
shadyJ
Speaker of the House
Staff member
Arranging a rigorous blind test is not an easy thing to do. It's a nice idea, but there are more interesting subjects for investigation than the audible differences between cables. The reason is that the outcome isn't likely to deviate from any rational person's expectations. I would be more interested in a blind test where the results are likely to be more informative.
 
William Lemmerhirt
Audioholic Overlord
This is easy. Read about Gene's coat hanger wire test. Hopefully link forthcoming...
 
Will Brink
Audioholic
Arranging a rigorous blind test is not an easy thing to do. It's a nice idea, but there are more interesting subjects for investigation than the audible differences between cables. The reason is that the outcome isn't likely to deviate from any rational person's expectations. I would be more interested in a blind test where the results are likely to be more informative.
Per the OP, I'm aware... still a topic of interest, at least to me. ;)
 
Will Brink
Audioholic
This is easy. Read about Gene's coat hanger wire test. Hopefully link forthcoming...
I have seen the coat hanger test; I didn't know that was Gene's doing. As I said, there have been some commendable attempts, but none that one could apply statistical methods to in order to test for statistical significance, as far as I'm aware. I thought the kids in the vid I posted also did a commendable job of it.
 
Swerd
Audioholic Warlord
They may not here, but they do endlessly elsewhere, and I'd like to actually develop a protocol to test it, to see if I/we can get them to STFU about it some day. I also simply find the intellectual/scientific discussion of how/if it can indeed be tested interesting, per my OP.
OK, I tried listening to that YT video. It was too long. By minute 12, they still hadn't made their point, and I gave up. But I heard enough to have a couple of simple observations:

They used two speakers in stereo. I would simplify things and use only one speaker in mono, without a subwoofer.

They made the mistake of asking too much of listeners: which cable did they prefer? Instead, they should have asked only if they could hear a difference. Avoid preferences, a subjective matter. Design the test so all answers are a simple YES or NO.

They did recognize that they didn’t test enough people. Apparently, someone told them they needed to test 100 people to achieve statistical significance. Maybe and maybe not. A test of roughly 30 to 100 people can really only answer the very simple YES or NO question “Is it worthwhile to test a larger group, 300 to 1000 people?”

The question becomes: how many people must be tested to provide an overall YES answer with ≥95% confidence (a 95% CI)? If the individual YES answers are close to the NO answers, such as 55% vs. 45%, more people are needed to achieve that 95% CI. If the difference between YES and NO is large, you can get away with fewer people. To get this right, one needs to really know statistics, or have a friend who does. (By confidence interval, I mean how many individuals must be sampled from all humans to provide an answer that we are at least 95% confident is not due to statistical sampling errors.) If I didn't warn you that statistics is boring, I'm telling you now.
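To put rough numbers on that, here is a back-of-envelope sketch using the standard normal-approximation sample-size formula for a proportion, n = z²·p(1−p)/E², where E is the half-width of the 95% CI. The splits below are invented for illustration:

```python
# Sketch: listeners needed so the 95% CI on a YES-rate is +/- margin.
from math import ceil

def sample_size(p_expected: float, margin: float, z: float = 1.96) -> int:
    """Normal-approximation sample size for estimating a proportion."""
    return ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# A 55% vs. 45% split must be resolved to better than +/-5 points:
print(sample_size(0.55, 0.05))   # ~381 listeners
# A lopsided 80% vs. 20% result tolerates a much wider interval:
print(sample_size(0.80, 0.15))   # ~28 listeners
```

This is why a 30-to-100-person test can only screen for large effects, while resolving a near-even split pushes you toward the 300-to-1000-person range.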

The purpose of this listening test is to demonstrate whether listeners can actually hear differences in a speaker's sound due to different speaker cables. Statistics, properly used, is essential for this. So are properly designed controls.

Experimental Controls
This listening test is really an experimental effort to measure something. It’s not an established test. How good is the test? How sensitive is it? What is the background noise level? How reliable is the test? In science experiments, these questions must always be answered at the same time as the main experiment is done. They're answered with the experimental controls. With good controls, strong conclusions can be made. With so-so controls, some conclusions can be made, but they’ll be weaker. Without controls, the main experiment is essentially meaningless. When all the audio subjectivists start attacking you because they don’t believe your results, you'll wish you had done better controls.

Negative Controls
Each listener must be tested when two identical cables are compared. In theory, they should all answer “NO, they’re not different”. But you cannot assume people can always tell when cables are identical. You can think of a Negative Control as a measure of how many “false positive” answers there are. You must measure the rate of false-positive answers that each listener reports. The false-positive percentage for each listener must be subtracted from the percentage of YES answers that listener provides when two genuinely different cables are being tested. If all listeners have a low average false-positive rate, you would be in a good position to make useful conclusions about the cable comparison results. If that average false-positive rate is high, 50% for example, you could only conclude that listeners could not reliably report what they heard. No useful conclusions about the cable comparisons could be made.
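As a concrete illustration of that correction (the listener numbers below are invented):

```python
# Sketch: subtract each listener's negative-control false-positive rate
# (YES answers when the cables were identical) from their YES rate when
# the cables really differed.

def corrected_rate(yes_rate_different: float, false_positive_rate: float) -> float:
    """Detection rate after removing the negative-control background."""
    return max(0.0, yes_rate_different - false_positive_rate)

# (YES rate, different cables) vs. (YES rate, identical cables):
print(corrected_rate(0.60, 0.10))  # 0.50 -- low background, informative
print(corrected_rate(0.60, 0.50))  # 0.10 -- high background, tells us little
```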

This isn't about dishonesty. It's that no one is infallible. You can think of the Negative Control as a measure of the background noise in the test. It won’t be zero. Measure it and find out what it is.

Positive Controls
Each listener must be tested when different levels of pink or white noise are added on top of the sounds or brief music passage used during a listening test. These tests are completely independent of speaker cables, but they are needed as comparisons to the speaker cable tests. Imagine a series of listening tests where each listener hears 0% added noise vs. a series of levels, such as 0.3%, 1%, 3%, 10%, or more. Imagine a graph of these results, where the X axis shows added noise from 0% to 10%, and the Y axis shows how many listeners could reliably answer "YES, I can hear it". With such a standard curve, you could measure what level of added noise could be heard by 50% of the listeners. This might be useful as a comparison to what percentage of listeners could hear differences, if any, between speaker cables.
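A sketch of how that 50% point might be read off the standard curve, with invented data:

```python
# Sketch: interpolate the added-noise level that 50% of listeners can
# reliably hear. Both the noise levels and detection rates are invented.
noise_pct   = [0.3, 1.0, 3.0, 10.0]      # % added noise
detect_rate = [0.05, 0.20, 0.55, 0.95]   # fraction reliably answering YES

def threshold_50(xs: list[float], ys: list[float]) -> float:
    """Linear interpolation of the 50% crossing of the detection curve."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if y0 < 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("curve never crosses 50%")

print(threshold_50(noise_pct, detect_rate))  # ~2.7% added noise
```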

The positive controls measure what genuine levels of noise or distortion each listener actually can reliably detect. But I’m just guessing what these noise levels should be, for discussion’s sake. What range of added noise actually works will have to be determined ahead of time, before doing the real cable comparison listening test.

Once you have that result, you can compare the Positive Control results to the cable comparison results. The goal is to be able to say something like this: “Under test conditions, where 50% of the listeners could reliably hear 2.3% added noise (a made-up value), 35% of listeners (another made-up number) could detect a difference between speaker cables.” It helps put things into context. If other people try to repeat this test, the Positive Controls can tell you if their test was more, less, or similarly sensitive.

As you can imagine, good Positive Controls aren’t as easy to come up with or perform as Negative Controls. But you must run appropriate Positive Controls to have an idea how sensitive the whole cable comparison test actually is.

Again, this isn’t about dishonesty. The sounds due to different cables – if they exist at all – might be very subtle. No one can always hear differences no matter how subtle they might be. Measure it and find out.

If you imagine how many individual listening tests each listener must endure, you'll have an idea how long this will take. It won’t be easy. And, all this testing is much more interesting when there is likely to be a real difference. Listening to different speaker cables won’t provide that kind of excitement. Unless you think listening to paint dry is exciting.
 
killdozzer
Audioholic Samurai
@Will Brink one more very important thing: if this is truly your goal, "I/we can get them to STFU about it some day", then it's a futile endeavor. Tests don't successfully dispute someone's desire to believe. If the desire is really there, a way will be found, one way or another, to discredit your test.

Michael (Idiot) Fremer: "If the difference can't be heard in a DBT, then there must be something wrong with the DBT as a means of evaluating audio equipment."
 
highfigh
Seriously, I have no life.
They may not here, but they do endlessly elsewhere, and I'd like to actually develop a protocol to test it, to see if I/we can get them to STFU about it some day. I also simply find the intellectual/scientific discussion of how/if it can indeed be tested interesting, per my OP.
If someone has something to sell, they will never STFU about it.
 
highfigh
Seriously, I have no life.
I have seen the coat hanger test; I didn't know that was Gene's doing. As I said, there have been some commendable attempts, but none that one could apply statistical methods to in order to test for statistical significance, as far as I'm aware. I thought the kids in the vid I posted also did a commendable job of it.
The coat hanger test was Roger Russell's, not Gene's.
 
slipperybidness
Audioholic Warlord
OK, I tried listening to that YT video. It was too long. By minute 12 I gave up, as they still hadn't made their point. But I heard enough to have a couple of simple observations:

[snip]

The question becomes: how many people must be tested to provide an overall YES answer with ≥95% confidence (a 95% CI)? If the individual YES answers are close to the NO answers, such as 55% vs. 45%, more people are needed to achieve that 95% CI. If the difference between YES and NO is large, you can get away with fewer people. To get this right, one needs to really know statistics, or have a friend who does.

[snip]
I just wanted to add a little on my interpretation of the 95% Confidence Interval. My education/understanding is that a "95% confidence interval" means that we have 95% Confidence that the Sample Set is an accurate representation of the population as a whole.

Beyond that, you clearly have quite a bit of experience in such matters! I think we see your next thesis :p
 
Swerd
Audioholic Warlord
You test speaker cables objectively with a multimeter. Any method that involves your ears is subjective.
Using a multimeter, or any other electronic lab bench measurement, only measures physical parameters. You cannot measure what people hear with a multimeter. Only listening tests can do that. Floyd Toole and Sean Olive spent much of their careers establishing how to do that with proper scientific controls that minimize or eliminate the subjectivity.
 
Swerd
Audioholic Warlord
I just wanted to add a little on my interpretation of the 95% Confidence Interval. My education/understanding is that a "95% confidence interval" means that we have 95% Confidence that the Sample Set is an accurate representation of the population as a whole.
Your definition of CI is worded better than mine. It's what I had in mind, but didn't say as well.
Beyond that, you clearly have quite a bit of experience in such matters! I think we see your next thesis :p
I've had lots of experience in lab biochemistry and molecular biology, but very little experience in blind listening tests. Just once I participated in one, where about 40 DIY speaker builders listened, while blind, to speakers with crossovers made with either cheap non-polar electrolytic capacitors or expensive film-type capacitors. No one in the group could reliably identify them by listening. Overall, the group's average was 50% right answers and 50% wrong, no different from random guessing. The easy conclusion was that people could not hear differences in sound due to different crossover capacitors.
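For what it's worth, a quick calculation shows how unremarkable that outcome is under pure guessing, treating each of the ~40 listeners as one right/wrong answer (a simplification):

```python
# Sketch: probability of at least 20 correct out of 40 under coin-flipping.
from math import comb

n, hits = 40, 20
p_value = sum(comb(n, k) for k in range(hits, n + 1)) / 2**n
print(p_value)  # ~0.56 -- statistically indistinguishable from guessing
```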

Importantly, no controls were done in that test. Dennis Murphy, one of the organizers of the test, told me at the time that doing controls like that would consume too much time & effort. Time & effort that he preferred to spend on testing different capacitors.

A raging debate soon broke out, challenging the test results. All the usual arguments were invoked about how the test couldn't have been sensitive enough to detect the subtle, difficult to measure, differences made by different capacitors. A simple negative control could have argued strongly against those non-believers' points. Not long afterwards, Dennis admitted to me that he wished he had done that negative control.

Including appropriate negative and positive controls with experiments was an important part of my scientific education, in grad school and beyond. Everyone had to learn the hard way that publishing anything required good controls. And figuring out what they should be was the hardest part of doing science.
 
Teetertotter?
Senior Audioholic
With my 16 ga, 2-conductor, stranded copper, twisted-pair, shielded and jacketed, USA-made wire in runs of no more than 30 feet, my speakers provide music to my ears. Is that good speaker wire testing? Do I need to bump it up to 14 or 10 ga for better hearing/clarity performance from my speakers? I don't know. Is it important in my case?
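For scale, a back-of-envelope sketch in Python, assuming standard copper resistance per AWG, a nominal flat 8 Ω load, and the common ~0.5 dB rule of thumb for a barely audible level change (all simplifications):

```python
# Sketch: level loss from cable series resistance into an 8-ohm load.
# AWG values are standard for copper; load and threshold are nominal.
from math import log10

OHMS_PER_1000FT = {16: 4.016, 14: 2.525, 10: 0.9989}  # per conductor

def level_loss_db(awg: int, run_ft: float, load_ohms: float = 8.0) -> float:
    """Insertion loss from the out-and-back cable resistance."""
    r_cable = OHMS_PER_1000FT[awg] * (2 * run_ft) / 1000.0
    return 20 * log10(load_ohms / (load_ohms + r_cable))

for awg in (16, 14, 10):
    print(f"{awg} AWG, 30 ft: {level_loss_db(awg, 30):.2f} dB")
# 16 AWG: -0.26 dB; 14 AWG: -0.16 dB; 10 AWG: -0.06 dB -- all below the
# ~0.5 dB change usually taken as barely audible.
```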
 