Objective testing of speaker wire? Does it exist?

highfigh

Seriously, I have no life.
Your definition of CI is worded better than mine. It's what I had in mind, but didn't say as well.
I've had lots of experience in lab biochemistry and molecular biology, but very little experience in blind listening tests. Just once I participated in one, where about 40 DIY speaker builders listened, while blind, to speakers with crossovers made with either cheap non-polar electrolytic capacitors or expensive film-type capacitors. No one in the group could reliably identify them by listening. Overall, the group's average was 50% right answers and 50% wrong, no different from random guessing. The conclusion was that people could not hear differences in sound due to different crossover capacitors.

Importantly, no controls were done in that test. Dennis Murphy, one of the organizers of the test, told me at the time that doing controls like that would consume a lot of time and effort, time and effort that he preferred to spend on testing different capacitors.

A raging debate soon broke out, challenging the test results. All the usual arguments were invoked about how the test couldn't have been sensitive enough to detect the subtle, unmeasurable differences made by different capacitors. A simple negative control could have argued strongly against those non-believers' points. Not long afterwards, Dennis admitted to me that he wished he had done that negative control.

Including appropriate negative and positive controls with experiments was an important part of my scientific education, in grad school and beyond. Everyone had to learn the hard way that publishing anything required good controls. And figuring out what they should be was the hardest part of doing science.
Having worked in the consumer audio business for over 40 years, I still have to wonder about the reasons for choosing equipment. At some point, wasn't it about listening to music and other source material? What happened to that, as the main goal?

Trying to find the absolute best equipment and accessories is a good way to go broke and end up extremely frustrated, unless you happen to find things that work well together and your room is fairly acoustically neutral, by design or coincidence.

I don't have a problem with searching for 'better', but 'best' is very similar to 'perfect', which is impossible. 'Best' means that nothing is better than the item in question.
 
highfigh

Seriously, I have no life.
With my 16 ga, 2-conductor, stranded copper, twisted-pair, shielded and jacketed, USA-made wire in runs not over 30 feet, my speakers provide music to my ears. Is that good speaker wire testing? Do I need to bump it up to 14 or 10 ga for better hearing/clarity performance from my speakers? I don't know. Is it important in my case?
Speaker wire is rarely shielded.
 
Swerd

Audioholic Warlord
With my 16 ga, 2-conductor, stranded copper, twisted-pair, shielded and jacketed, USA-made wire in runs not over 30 feet, my speakers provide music to my ears.
Good. I'm not at all surprised to hear that. It's what science calls an anecdotal observation – one person's experience. Our question above is how someone could go about disproving the claims that expensive or exotic speaker wires make a difference in sound quality. Many, many anecdotal observations could never accomplish that.
Is that good speaker wire testing? Do I need to bump it up to 14 or 10 ga for better hearing/clarity performance from my speakers? I don't know. Is it important in my case?
I doubt if you'll hear any difference with thicker gauge wires.

In the future, if you ever replace those 16 ga wires with 14 or 12 ga, you can try to see if you hear any change. But before you compare the old 16 ga wire with the new thicker wire, be sure to cut off the old wire's terminations and make new ones with unoxidized wire, like you would have on the new thicker-gauge wire.
 
highfigh

Seriously, I have no life.
Yeah, that was me. I thought Gene did the coat hanger test and Roger Russell did similar tests with actual wire, and someone also used tin foil. Now I just can't remember... I've read too many words in my life and now they're running together, lol.
I have read posts and watched videos of people who braided Cat5e and claimed it was transformational for their systems. I'll never get that time back.
 
slipperybidness

Audioholic Warlord
I have read posts and watched videos of people who braided Cat5e and claimed it was transformational for their systems. I'll never get that time back.
How about these magical USB cables!

Clearly, the people who believe in that don't really understand the USB protocol and signal buffering. o_O
 
Swerd

Audioholic Warlord
Michael (Idiot) Fremer: If the difference can't be heard in a DBT, then there must be something wrong with the DBT as a means of evaluating audio equipment.
My response to Fremer or any other idiot would be, if you can't measure a difference in sound quality, perhaps it isn't really there.
 
BoredSysAdmin

Audioholic Slumlord
With my 16 ga, 2-conductor, stranded copper, twisted-pair, shielded and jacketed, USA-made wire in runs not over 30 feet, my speakers provide music to my ears. Is that good speaker wire testing? Do I need to bump it up to 14 or 10 ga for better hearing/clarity performance from my speakers? I don't know. Is it important in my case?
I can't believe we went 2 pages in yet another speaker wire thread and no one brought up Roger Russell's excellent write-up.
@Teetertotter? For you specifically, the relevant table is here: http://www.roger-russell.com/wire/wire.htm#wiretable
 
Will Brink

Audioholic
OK, I tried listening to that YT video. It was too long. By minute 12 I gave up, as they still hadn't made their point. But I watched long enough to make a couple of simple observations:

They used two speakers in stereo. I would simplify things and use only one speaker in mono, without a subwoofer.

They made the mistake of trying to do too much by asking listeners which cable they preferred. Instead, they should have asked only whether they could hear a difference. Avoid preferences; they're a subjective matter. Design the test so all answers are a simple YES or NO.

They did recognize that they didn’t test enough people. Apparently, someone told them they needed to test 100 people to achieve statistical significance. Maybe. A test of roughly 30 to 100 people can really only answer the very simple YES or NO question “Is it worthwhile to test a larger group, 300 to 1000 people?”

The question becomes: how many people must be tested to support an overall YES answer with at least 95% confidence (95% CI)? If the individual YES answers are close to the NO answers, such as 55% vs. 45%, more people are needed to achieve that confidence. If the difference between YES and NO is large, you can get away with fewer people. To get this right, one needs to really know statistics, or have a friend who does. (By 95% CI, I mean sampling enough individuals from all humans that we can be at least 95% confident the answer is not due to statistical sampling error.) If I didn't warn you that statistics is boring, I'm telling you now.
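
To make that concrete, here's a minimal sketch in plain Python (standard library only) of the usual normal-approximation sample-size calculation for testing a group's YES rate against 50% coin-flipping. The 55% and 70% detection rates, the 95% confidence level, and the 80% power are assumptions chosen for illustration, not numbers from any real test.

```python
# Minimal sketch: how many listeners are needed to show that a group's
# "YES, I hear a difference" rate differs from 50% random guessing.
# Standard normal-approximation formula for a one-sample proportion test.
from math import ceil, sqrt
from statistics import NormalDist

def listeners_needed(true_rate, alpha=0.05, power=0.80):
    """Sample size to distinguish true_rate from 50% guessing (two-sided)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p0, p1 = 0.5, true_rate
    num = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

print(listeners_needed(0.55))  # 55% vs. 45%: roughly 780 listeners
print(listeners_needed(0.70))  # 70% vs. 30%: about 47 listeners
```

This matches the intuition above: a 55/45 split demands hundreds of listeners, while a large effect can be demonstrated with a few dozen.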

The purpose of this listening test is to demonstrate whether listeners can actually hear differences in a speaker's sound due to different speaker cables. Statistics, properly used, is essential for this. So are properly designed controls.

Experimental Controls
This listening test is really an experimental effort to measure something. It’s not an established test. How good is the test? How sensitive is it? What is the background noise level? How reliable is the test? In science experiments, these questions must always be answered at the same time as the main experiment is done. They're answered with the experimental controls. With good controls, strong conclusions can be made. With so-so controls, some conclusions can be made, but they’ll be weaker. Without controls, the main experiment is essentially meaningless. When all the audio subjectivists start attacking you because they don’t believe your results, you'll wish you had done better controls.

Negative Controls
Each listener must be tested when two identical cables are compared. In theory, they should all answer “NO, they’re not different”. But you cannot assume people can always tell when cables are identical. You can think of a Negative Control as a measure of how many “false positive” answers there are. You must measure the rate of these false positive answers for each listener. Each listener’s false positive percentage must be subtracted from the percentage of YES answers that listener provides when two genuinely different cables are being tested. If all listeners have a low average false positive rate, you would be in a good position to draw useful conclusions from the cable comparison results. If that average false positive rate is high, 50% for example, you could only conclude that listeners could not reliably report what they heard. No useful conclusions about the cable comparisons could be made.

This isn't about dishonesty. It's that no one is infallible. You can think of the Negative Control as a measure of the background noise in the test. It won’t be zero. Measure it and find out what it is.
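
As a concrete illustration of that bookkeeping, here's a minimal sketch in plain Python. The listener records and the 25% reliability cutoff are made-up assumptions for the example, not recommendations.

```python
# Minimal sketch of the Negative Control bookkeeping described above.
# Identical-cable trials give each listener's false positive rate;
# different-cable trials give their raw YES rate. All data are made up.

def yes_rate(answers):
    """Fraction of YES (True) answers in a list of responses."""
    return sum(answers) / len(answers)

listeners = {
    # name: (identical-cable trials, different-cable trials)
    "A": ([False] * 18 + [True] * 2,  [True] * 11 + [False] * 9),
    "B": ([False] * 10 + [True] * 10, [True] * 12 + [False] * 8),
}

for name, (identical, different) in listeners.items():
    fp = yes_rate(identical)    # said YES although the cables matched
    raw = yes_rate(different)   # said YES on real cable comparisons
    if fp >= 0.25:              # arbitrary cutoff for this illustration
        print(f"Listener {name}: false positive rate {fp:.0%}, unreliable")
    else:
        print(f"Listener {name}: corrected detection rate {raw - fp:.0%}")
```

Listener A's 55% raw YES rate shrinks to 45% once their 10% false positive rate is subtracted; listener B's 50% false positive rate makes their answers uninterpretable, exactly the situation described above.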

Positive Controls
Each listener must be tested with different levels of noise, such as pink or white noise, added on top of the sounds or brief music passages used during a listening test. These tests are completely independent of speaker cables, but they are needed, as I hope to show you. Imagine a series of listening tests where each listener hears 0% added noise vs. a series of levels, such as 0.3%, 1%, 3%, 10%, or more. Imagine a graph of these results, where the X axis shows added noise from 0% to 10%, and the Y axis shows how many listeners could reliably answer "YES, I can hear it". With such a standard curve, you could measure what level of added noise could be heard by 50% of the listeners. This might be useful as a comparison to what percentage of listeners could hear differences, if any, between speaker cables.

The positive controls measure what genuine levels of noise or distortion each listener actually can reliably detect. But I’m just guessing what these numbers should be, for discussion’s sake. What range of added noise actually works will have to be determined ahead of time, before doing the real cable comparison listening test.

Once you have that result, you can compare the Positive Control results to the cable comparison results. The goal is to be able to say something like this: “Under test conditions where 50% of the listeners could reliably hear 2.3% added noise (a made-up value), 35% of listeners (another made-up number) could detect a difference between speaker cables.” It helps put things into context. If other people try to repeat this test, the Positive Controls can tell you whether their test was more, less, or similarly sensitive.

As you can imagine, good Positive Controls aren’t as easy to come up with or perform as Negative Controls. But you must run appropriate Positive Controls to have an idea how sensitive the whole cable comparison test actually is.

Again, this isn’t about dishonesty. The sounds due to different cables – if they exist at all – might be very subtle. No one can always hear differences no matter how subtle they might be. Measure it and find out.
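
To make the standard-curve idea concrete, here's a minimal sketch in plain Python that interpolates the added-noise level audible to 50% of listeners. The noise levels and detection fractions are made-up numbers for illustration.

```python
# Minimal sketch of the Positive Control standard curve described above.
# x = percent added noise, y = fraction of listeners reliably answering YES.
# All values are made up for illustration.
noise_pct   = [0.3, 1.0, 3.0, 10.0]
detect_frac = [0.05, 0.20, 0.60, 0.95]

def noise_at_50pct(xs, ys):
    """Linearly interpolate the noise level at 50% detection."""
    points = list(zip(xs, ys))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("the 50% point lies outside the measured range")

print(f"50% of listeners can hear about "
      f"{noise_at_50pct(noise_pct, detect_frac):.1f}% added noise")
```

With these made-up numbers the threshold comes out at 2.5% added noise; that is the figure you would quote alongside the cable comparison result, as in the example sentence above.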

If you imagine how many individual listening tests each listener must endure, you'll have an idea how long this will take. It won’t be easy. And, all this testing is much more interesting when there is likely to be a real difference. Listening to different speaker cables won’t provide that kind of excitement. Unless you think listening to paint dry is exciting.
Good info, thanx. I took epi and biostats and have been published, so I have a reasonable understanding of the stats involved (but far from a stats expert, to be sure). Per prior comments, it's clearly not an easy thing to test accurately, but I'm interested in what a valid protocol would look like. Older paper, but interesting:

Audio Analysis VI: Testing Audio Cables

On the kids' vid, I commend them for the attempt, and at least they were clear it was far from a scientifically valid result; still, some interesting results.

Unclear where they came up with 100 people as the number needed for reliable and robust statistical significance, but I'm no stats guy.

 
Will Brink

Audioholic
If someone has something to sell, they will never STFU about it.
It's far worse than that. Many (most?) audiophiles think cables and wires impact the sound, and as the man said:

“No one ever went broke underestimating the intelligence of the American public.” – H.L. Mencken

It's great that this forum and page don't push voodoo and audio BS.
 
Will Brink

Audioholic
My response to Fremer or any other idiot would be, if you can't measure a difference in sound quality, perhaps it isn't really there.
That causes cognitive dissonance and thus will be rejected.
 
Swerd

Audioholic Warlord
If any of you have trouble getting your head around what I mean by a positive control, here's a good example. Once, at a DIY speaker meeting, a guy proudly presented his 2-way speakers, mainly of his own design. The drivers were very expensive, a SEAS Excel 6½" mid-woofer and a SEAS Excel Millennium tweeter. He also told us, proudly, how he used only Teflon-insulated silver wires, silver solder, very expensive capacitors, and expensive flat-wire inductor coils within the cabinets. He assured us that all those materials significantly improved the sound of the speaker.

When he demo'ed the speakers, someone (Dennis Murphy) politely suggested that he might have mistakenly wired the tweeter with polarity reversed from the intended design. The builder doubted that. Later that same day, the speakers were measured acoustically, and they were indeed mistakenly wired. Those acoustic measurements looked similar to this other 2-way design:

Wired with the correct polarity:
[MTM Freq Resp on axis.gif]

And with the tweeter polarity reversed:
[MTM Reverse Null.gif]


The difference looks pretty large on these graphs, but that deep & wide trough centered at roughly 2.5 kHz was surprisingly easy to overlook. The take-home lesson: if the builder could not hear the difference between the correct and incorrect wiring polarities, what can we make of his claim about the improved sound qualities of the silver wires and other expensive crossover components? To be audible to him, those component differences would have to have been even larger than the 2.5 kHz suckout he overlooked. That is a positive control – in this case, an unintended positive control. The guy's understandable late-night mistake undermined his claims about silver wires and exotic crossover components.
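
For anyone curious why a reversed tweeter produces that null, here's a minimal sketch in plain Python using idealized 4th-order Linkwitz-Riley crossover filters at 2.5 kHz. Real drivers and crossovers are messier, so treat it as an illustration of the principle, not a model of that particular speaker.

```python
# Minimal sketch: an ideal LR4 crossover sums flat when the drivers are
# in the intended polarity, but produces a deep null at the crossover
# frequency when the tweeter's polarity is reversed.
import math

FC = 2500.0  # crossover frequency in Hz, matching the anecdote above

def lr4_pair(f):
    """(lowpass, highpass) complex responses of an ideal LR4 crossover."""
    s = 1j * f / FC                            # normalized complex frequency
    bw2 = s * s + math.sqrt(2) * s + 1         # 2nd-order Butterworth denominator
    return (1 / bw2) ** 2, (s * s / bw2) ** 2  # LR4 = Butterworth section squared

for f in (625, 1250, 2500, 5000, 10000):
    lp, hp = lr4_pair(f)
    correct = 20 * math.log10(abs(lp + hp))           # intended polarity
    flipped = 20 * math.log10(abs(lp - hp) + 1e-12)   # tweeter reversed
    print(f"{f:6d} Hz: correct {correct:6.1f} dB, reversed {flipped:7.1f} dB")
```

The intended polarity sums to 0 dB at every frequency, while the reversed connection collapses into a null at 2.5 kHz (the huge negative number printed there is just the numerical floor), much like the second graph.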
 
Swerd

Audioholic Warlord
Good info, thanx. Took epi and bio stats and have been published, so have a reasonable understanding of the stats involved (but far from a stats expert to be sure) and per prior comments, clearly not an easy thing to test accurately, but interested in what a valid protocol would look like.
If you are still near that school (Public Health, right?), try to find a stats guy who is familiar with clinical trial design. The problem of how many people are needed to test an experimental drug is similar to that of an audio listening test. Ask him this: if you were designing a randomized placebo-controlled Phase 2 clinical trial of a new drug to treat disease X, how many patients would be needed, and how many individual positive responses could be considered an overall positive outcome? In the case of disease X, existing standard treatments can produce a positive response rate of 20%.
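
For a flavor of what that statistician would compute, here's a minimal sketch in plain Python of one small piece of such a design: the minimum number of responders, out of n patients, needed to reject the 20% historical response rate with an exact one-sided binomial test. The n = 25 and alpha = 0.05 are made-up assumptions; a real Phase 2 design (e.g., a Simon two-stage design) involves considerably more than this.

```python
# Minimal sketch: smallest number of responders in an n-patient trial that
# would be inconsistent with the historical 20% response rate (exact
# one-sided binomial test). n and alpha are made-up illustration values.
from math import comb

def min_responders(n, p0=0.20, alpha=0.05):
    """Smallest k such that P(X >= k) <= alpha when X ~ Binomial(n, p0)."""
    for k in range(n + 1):
        upper_tail = sum(comb(n, i) * p0**i * (1 - p0)**(n - i)
                         for i in range(k, n + 1))
        if upper_tail <= alpha:
            return k
    return None

n = 25
print(f"With {n} patients, at least {min_responders(n)} responses "
      f"are needed to beat the 20% historical rate")
```

With these assumptions it works out to 9 responders out of 25 (36%), which illustrates why small trials can only confirm large effects, the same problem a small listening test faces.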
 
Will Brink

Audioholic
If you are still near that school (Public Health, right?), try to find a stats guy who is familiar with clinical trial design. The problem of how many people are needed to test an experimental drug is similar to that of an audio listening test. Ask him this: if you were designing a randomized placebo-controlled Phase 2 clinical trial of a new drug to treat disease X, how many patients would be needed, and how many positive responses could be considered a positive outcome? In the case of disease X, standard treatments can produce a positive response rate of 20%.
Natural sciences with a pre-med focus. I work with/know a ton of people involved in clinical trial design. I've been involved in a few myself, but we always have the biostats person tell us what n is required, etc., to get the statistical significance required for validity. One of the big problems in doing such an audio test is that it requires input and knowledge from a wide array of people from different scientific/medical backgrounds, plus audio engineers.
 
Swerd

Audioholic Warlord
Natural sciences with a pre-med focus. I work with/know a ton of people involved in clinical trial design. I've been involved in a few myself, but we always have the biostats person tell us what n is required, etc., to get the statistical significance required for validity. One of the big problems in doing such an audio test is that it requires input and knowledge from a wide array of people from different scientific/medical backgrounds, plus audio engineers.
For the last 20 years before retiring in 2017, I worked at the National Cancer Inst., in a group called the Cancer Therapy Evaluation Program (CTEP). We sponsored clinical trials of a wide variety of experimental cancer drugs.

Every proposal for a clinical trial we received was sent to a biostatistics group for their input. I quickly learned to rely on them, and that many of the proposals we received had bogus statistical designs. Few MDs, even the well-known clinical researchers, really understood statistics well enough to write their own statistical analyses. The really good ones relied on their own clinical trial statisticians.
 
slipperybidness

Audioholic Warlord
For the last 20 years before retiring in 2017, I worked at the National Cancer Inst., in a group called the Cancer Therapy Evaluation Program (CTEP). We sponsored clinical trials of a wide variety of experimental cancer drugs.

Every proposal for a clinical trial we received was sent to a biostatistics group for their input. I quickly learned to rely on them, and that many of the proposals we received had bogus statistical designs. Few MDs, even the well-known clinical researchers, really understood statistics well enough to write their own statistical analyses. The really good ones relied on their own clinical trial statisticians.
Personally, Stat was a HUGE HOLE in my formal education!

The only real statistics I got was tied into Quantum Chemistry.

Now that I work in manufacturing, Statistical Process Control is the name of the game, and I had some catching up to do!
 
mazersteven

Audioholic Warlord
I don't have a problem with searching for 'better', but 'best' is very similar to 'perfect', which is impossible. 'Best' means that nothing is better than the item in question.
Now you've done it. AcuDefTechGuy is going to tell us all his Yamaha cable is the Best. o_O
 
Will Brink

Audioholic
For the last 20 years before retiring in 2017, I worked at the National Cancer Inst., in a group called the Cancer Therapy Evaluation Program (CTEP). We sponsored clinical trials of a wide variety of experimental cancer drugs.

Every proposal for a clinical trial we received was sent to a biostatistics group for their input. I quickly learned to rely on them, and that many of the proposals we received had bogus statistical designs. Few MDs, even the well-known clinical researchers, really understood statistics well enough to write their own statistical analyses. The really good ones relied on their own clinical trial statisticians.
Exactly. Stats are a totally different animal and skill set, and far too many people ignore that and think they understand it. While I survived biostats and epi, it was clear it would never be an area of expertise for me. Unfortunately, as I suspect you're well aware, even among MDs and related clinical researchers, some Dunning-Kruger effect and ego creep in, and that's often where they get themselves into trouble. I have refereed for a journal, and I always recommend having a biostatistician take a hard look at the numbers.

There does appear to be some lit on the topic, more than I realized:

Ten years of A/B/X Testing
 
