Loudspeaker Myths: Separating the Scientific Facts from Science Fiction

admin

Audioholics Robot
Staff member
Just like we’ve found with speaker cables and audio interconnects, snake oil and gimmicks are alive and well in the world of loudspeakers. With over 400 loudspeaker brands in the consumer market competing for a very small piece of the action, it’s no wonder manufacturers feel the need to differentiate themselves, often using Ivory Tower tactics and pseudoscience to proclaim product superiority. In fact, some of the loudspeaker science reads less believably than science fiction. In this article and one-on-one interview between Gene and Hugo, we break down and discuss some of the common nonsense we’ve found surrounding consumer loudspeaker products.



Discuss our Loudspeaker Myths article here

Please share your favorite myths that need a good debunking.

Also check out our interview video that covers the Loudspeaker myths and also goes into some good insider info:

 
3db

Audioholic Slumlord
"Editorial Note About the NRC Measurements
The listening window is a combination of 0, +/- 15 vertical, and +/- 15 and 30 horizontal (the NRC measurements used 15 deg increments, Harman uses 10 deg). This describes the average direct sound arriving at a group of listeners. It does not include the first reflections.

Sound power is a weighted average of all the curves, leading to an estimate of the total sound energy radiated over a 360 deg sphere. "

Wouldn't the first reflection points be considered part of the room's response to the speaker's stimulus, thus measuring the room's interaction rather than the loudspeaker itself?
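As a side note, the "listening window" described in that editorial note is just an average of several SPL curves. A minimal sketch of one common way to compute it, power-averaging in the linear domain (the angle set and the exact averaging method here are assumptions, not the NRC or Harman procedure):

```python
# Illustrative sketch only: power-averaging off-axis SPL curves (dB) into a
# single "listening window" curve. The toy data and averaging method are
# assumptions for demonstration.
import numpy as np

def listening_window(curves_db: np.ndarray) -> np.ndarray:
    """curves_db has shape (n_angles, n_freqs), SPL in dB.
    Convert to relative power, average across angles, convert back to dB."""
    power = 10.0 ** (curves_db / 10.0)
    return 10.0 * np.log10(power.mean(axis=0))

# Toy data: flat 85 dB on-axis, 82 dB at two off-axis angles
on_axis = np.full(4, 85.0)
off_axis = np.full(4, 82.0)
window = listening_window(np.stack([on_axis, off_axis, off_axis]))
# The averaged window sits between the on- and off-axis levels (~83.2 dB here)
```

Note that averaging the dB values directly would give a slightly different (and less physically meaningful) result than averaging in the power domain, which is why the conversion steps matter.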




 
TheWarrior

Audioholic Ninja
^ The mic would be positioned too close to the driver itself to be influenced by the room in any way. Alternatively, you would position the speaker (anechoic chamber or not) away from any walls, since this test is intended to accurately measure the driver's response, not the room's response.

And Gene, I honestly think that your writing has reached a new height this year. It's a helluva task to combine facts, opinion, reason, and science in an enjoyable yet educational way. Keep it up!
 
3db

Audioholic Slumlord
^ The mic would be positioned too close to the driver itself to be influenced by the room in any way. Alternatively, you would position the speaker (anechoic chamber or not) away from any walls, since this test is intended to accurately measure the driver's response, not the room's response.
Then why the comment about the first reflection points?
 
TheWarrior

Audioholic Ninja
Because the first reflection point is unique to every room, which is why you would measure the drivers individually in an anechoic environment.
 
utopianemo

Junior Audioholic
DBTs

In my opinion, double-blind tests are for testing the reviewer, not the speakers. People who claim that's the only way to do speaker testing are almost always doing so because they don't believe reviewers can tell the difference between products and want them to prove themselves.

I read speaker reviews with both eyes open and always try to take into account who it is that's writing the review. If I know the reviewer has a certain tendency (e.g., they only listen at moderate volumes, they prefer a bright, punchy sound, or imaging is a top priority), I'll try to weigh their review accordingly.

Knowing a reviewer's preferences and style speaks more to me about their proficiency than any DBT or ABX test.
 
Swerd

Audioholic Warlord
What a good article! It covers every aspect that I know of concerning the Myth vs. Science debate.

Well done.

Most manufacturers now recognize the importance of minimizing diffraction by better integrating the drivers and crossover networks and narrowing the baffle area. It is rare today to find a legitimately high fidelity consumer loudspeaker in a big wide 1970’s style speaker cabinet with tweeters firing off opposite ends of the baffle.
The other result of making speaker baffles narrower is that it moves the baffle-step frequencies closer to the midrange, where most listeners are most sensitive to changes. It's true that most manufacturers recognize the importance of minimizing diffraction, but fewer of them recognize the need to compensate or equalize for the apparent loss of mid-bass that comes with a narrow baffle.

A flat frequency response across the crossover range is often criticized as causing a "bright-sounding" speaker. If the flat frequency response is accompanied by sufficient compensation (about 4-6 dB) at frequencies below the baffle-step range, it results in a highly satisfying sound (in my experience) without any objectionable brightness.
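Swerd's point can be made concrete with the common rule of thumb that the baffle step centers around f3 ≈ 115 / baffle width (in meters). A rough sketch, where both the constant and the first-order transition model are textbook approximations rather than any specific manufacturer's method:

```python
# Rough sketch of the baffle-step rule of thumb; the 115/width constant and
# the first-order transition model are common approximations, not a spec.
import math

def baffle_step_f3(baffle_width_m: float) -> float:
    """Approximate center frequency (Hz) of the baffle step for a given width."""
    return 115.0 / baffle_width_m

def baffle_step_loss_db(f_hz: float, f3_hz: float) -> float:
    """Mid-bass loss relative to the high-frequency level, modeled as a
    first-order transition from -6 dB (4-pi radiation) up to 0 dB (2-pi)."""
    x = f_hz / f3_hz
    gain = math.sqrt((0.25 + x * x) / (1.0 + x * x))  # pressure rises 0.5 -> 1.0
    return 20.0 * math.log10(gain)

# A 20 cm wide baffle puts the step near 575 Hz; compensation would add
# roughly -loss dB of boost below that frequency.
f3 = baffle_step_f3(0.20)
```

In practice the full 6 dB is rarely applied, since room gain and wall proximity restore some of the loss, which is consistent with the 4-6 dB range mentioned above.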
 
Swerd

Audioholic Warlord
Because the first reflection point is unique to every room. Hence why you would measure the drivers individually in an anechoic environment.
Modern measuring software can eliminate room reflections by adjusting the time range during which the measurements are taken. This is usually spoken of as the "time gate". Typical measurements made in a real room can mimic an anechoic environment by stopping the measurements about 4.5 milliseconds after the impulse starts.
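To make the gating idea concrete, here is a minimal sketch (the sample rate, array layout, and toy reflection timing are all assumptions) showing how a rectangular gate discards a reflection that arrives after the gate closes, and the low-frequency resolution limit that the gate imposes:

```python
# Hypothetical sketch of time-gating an impulse response to mimic an
# anechoic measurement, assuming a 48 kHz capture held in a NumPy array.
import numpy as np

fs = 48_000                       # sample rate (Hz)
gate_ms = 4.5                     # gate length after the impulse starts
gate_n = int(fs * gate_ms / 1e3)  # samples kept before reflections arrive

# Toy impulse response: direct sound plus a room reflection arriving at 6 ms
ir = np.zeros(fs // 10)
ir[0] = 1.0                       # direct arrival
ir[int(0.006 * fs)] = 0.5         # first reflection (outside the gate)

gated = ir[:gate_n]               # rectangular gate drops the reflection
spectrum = np.fft.rfft(gated, n=fs)              # zero-pad for a smooth curve
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

# Caveat: a 4.5 ms gate limits frequency resolution to roughly 1/0.0045 Hz,
# so the gated curve is only trustworthy above about 222 Hz.
print(f"lowest resolvable frequency ≈ {1 / (gate_ms / 1e3):.0f} Hz")
```

This is the trade-off behind gated "quasi-anechoic" measurements: the gate removes the room, but the shorter the gate, the higher the frequency below which the result can no longer be trusted.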
 
gene

Audioholics Master Chief
Administrator
"Editorial Note About the NRC Measurements
The listening window is a combination of 0, +/- 15 vertical, and +/- 15 and 30 horizontal (the NRC measurements used 15 deg increments, Harman uses 10 deg). This describes the average direct sound arriving at a group of listeners. It does not include the first reflections.

Sound power is a weighted average of all the curves, leading to an estimate of the total sound energy radiated over a 360 deg sphere. "

Wouldn't the first reflection points be considered part of the room's response to the speaker's stimulus, thus measuring the room's interaction rather than the loudspeaker itself?





No. If the measurement is done anechoically or gated, the room response is removed.
 
gene

Audioholics Master Chief
Administrator
Excellent feedback guys. Keep it coming. This was a pretty big effort as it was really meant to be a 5-10 min video interview with Hugo and it just blew up. The video is completely separate content from the article since we don't script our videos. Once the camera starts rolling, watch out! LOL.
 
krabapple

Banned
A shame this useful article has such a schizophrenic attitude to DBTs (lots of anti-DBT snark, ending with 'but actually, DBTs are the way to go'). Really, why not just run these 'mythbusting' sections about DBTs by Drs. Toole or Olive first? Here are a few points I thought were overwrought:

No, DBTs are not the 'single most abused term in audio' -- from my experience, hardly *any* manufacturers even claim to use them in the first place! Feel free to prove me wrong -- list all the ones *you* have in mind.

No, 'the argument' is NOT that the prettier/more famously branded loudspeaker will *ALWAYS* be preferred. It's simply a common bias that needs to be accounted for in sensory and product-preference testing. Which, btw, are sciences too; DBTs are NOT just the tool of medical research. There are textbooks devoted to sensory testing methods. They cover DBTs. You should look them up.

No one I know of uses ABX tests for *loudspeakers* or for tests of preference generally. I can't recall seeing it recommended, either. There are many kinds of DBTs, suited for different purposes. ABX is recommended for testing claims of *difference*. So why even bring it up?
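For context, ABX difference claims are typically scored against pure chance with a one-sided binomial test. A minimal sketch (the trial counts below are made up for illustration):

```python
# Sketch of standard ABX scoring: the p-value is the probability of getting
# at least `correct` answers right out of `trials` by guessing (50% each).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial tail probability under pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 of 16 correct would happen by luck only ~3.8% of the time
p = abx_p_value(12, 16)
```

This is why ABX suits *difference* claims specifically: it yields a yes/no statistical verdict on audibility, but says nothing about which presentation a listener prefers.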

Instantaneous switching isn't confusing. What is more confusing for audio memory, is having a lag (NON-instantaneous switching) between stimuli. Minimizing the switching interval -- the time between the end of A and start of B -- is a *good thing* for increasing listener discrimination of small differences. Not as *big* an issue if differences are bigger, as tends to be the case for loudspeakers.

"Switching interval" and "length of musical sample (i.e. length of a trial)" are two different issues that appear to be confused in this article.

Obviously you can't level match loudspeakers the way you can amps or cables or CD players. No one claims you can, and I don't see 0.1 dB level matching being demanded of loudspeaker comparison anywhere.

Extended listening isn't a necessity, has its own drawbacks (noted in the article), and is arguably inferior to *trained* listening....a purpose of training is so reliable results can be obtained more quickly, because listeners can more quickly identify and articulate whatever differences are there. And IMO the 'pressure' aspect of a listening test is overrated -- especially when you aren't dealing with small differences. If loudspeaker listeners feel they need extended listening to form their 'true' preference, fine, nothing intrinsic to double-blind testing prevents that. The important thing (from an accuracy standpoint) is to test, in the end, if the preference is likely due to the *sound alone*, and that means a test that *minimizes biases from non-audible factors*. If the length of a blind test seems too short, I'd suggest you do your weeks of sighted comparative listening, form your preference -- *then* see if it holds up when you can't 'see' (literally or figuratively) the speakers. Should be a snap!



I'd like to see you publish 'your' listening comparison results formally, with detailed methods and tables, so I could rack them up against, say, Olive's and Toole's work on loudspeaker preference and performance, in terms of scientific rigor.
 
krabapple

Banned
In my opinion, Double-Blind tests are to test the reviewer, not the speakers. People who claim that is the only way to do speaker testing are almost always doing so because they don't believe reviewers can tell the difference between products and they want them to prove themselves.
Pretty much, yes. Blind methods can indicate whether the *reviewer* is being influenced by something beyond just the sound of the loudspeaker.


Knowing a reviewer's preferences and style speaks more to me about their proficiency than any DBT or ABX test.

To me, that's really just another 'circle of confusion' to use Floyd Toole's term. Too many unknowns relying on each other.
 
gene

Audioholics Master Chief
Administrator
A shame this useful article has such a schizophrenic attitude to DBTs (lots of anti-DBT snark, ending with 'but actually, DBTs are the way to go'). Really, why not just run these 'mythbusting' sections about DBTs by Drs. Toole or Olive first? Here are a few points I thought were overwrought:

No, DBTs are not the 'single most abused term in audio' -- from my experience, hardly *any* manufacturers even claim to use them in the first place! Feel free to prove me wrong -- list all the ones *you* have in mind.

No, 'the argument' is NOT that the prettier/more famously branded loudspeaker will *ALWAYS* be preferred. It's simply a common bias that needs to be accounted for in sensory and product-preference testing. Which, btw, are sciences too; DBTs are NOT just the tool of medical research. There are textbooks devoted to sensory testing methods. They cover DBTs. You should look them up.

No one I know of uses ABX tests for *loudspeakers* or for tests of preference generally. I can't recall seeing it recommended, either. There are many kinds of DBTs, suited for different purposes. ABX is recommended for testing claims of *difference*. So why even bring it up?

Instantaneous switching isn't confusing. What is more confusing for audio memory, is having a lag (NON-instantaneous switching) between stimuli. Minimizing the switching interval -- the time between the end of A and start of B -- is a *good thing* for increasing listener discrimination of small differences. Not as *big* an issue if differences are bigger, as tends to be the case for loudspeakers.

"Switching interval" and "length of musical sample (i.e. length of a trial)" are two different issues that appear to be confused in this article.

Obviously you can't level match loudspeakers the way you can amps or cables or CD players. No one claims you can, and I don't see 0.1 dB level matching being demanded of loudspeaker comparison anywhere.

Extended listening isn't a necessity, has its own drawbacks (noted in the article), and is arguably inferior to *trained* listening....a purpose of training is so reliable results can be obtained more quickly, because listeners can more quickly identify and articulate whatever differences are there. And IMO the 'pressure' aspect of a listening test is overrated -- especially when you aren't dealing with small differences. If loudspeaker listeners feel they need extended listening to form their 'true' preference, fine, nothing intrinsic to double-blind testing prevents that. The important thing (from an accuracy standpoint) is to test, in the end, if the preference is likely due to the *sound alone*, and that means a test that *minimizes biases from non-audible factors*. If the length of a blind test seems too short, I'd suggest you do your weeks of sighted comparative listening, form your preference -- *then* see if it holds up when you can't 'see' (literally or figuratively) the speakers. Should be a snap!



I'd like to see you publish 'your' listening comparison results formally, with detailed methods and tables, so I could rack them up against, say, Olive's and Toole's work on loudspeaker preference and performance, in terms of scientific rigor.
You obviously missed many key points in the article, particularly how DBTs are often manipulated or not run as true DBTs. You also likely didn't notice that Dr. Floyd Toole was one of the peer reviewers and contributed points that you criticize, but I'd expect nothing less given your earned forum reputation :rolleyes:
 
crossedover

Audioholic Chief
IMHO, double-blind testing should be implemented with electronics, not speakers. That's where I feel the most bias is. If speaker listening tests are properly set up as per the article, then evaluation should be easier for those who are truly concerned with it.
 
gene

Audioholics Master Chief
Administrator
IMHO, double-blind testing should be implemented with electronics, not speakers. That's where I feel the most bias is. If speaker listening tests are properly set up as per the article, then evaluation should be easier for those who are truly concerned with it.
Agreed. Though blind testing is likely good enough.

I'm still waiting for the car industry to suggest you must do blind driving tests to truly determine which car the consumer prefers without the bias of brand or aesthetics ;)

I have had countless people come into my theater room to participate in speaker shootouts. Most of them, sitting 15-20 ft away from the speakers, were unable to identify which speakers were playing, especially if the grilles were still on. Inexperienced listeners couldn't care less about brand, and most of these listeners never even commented on the looks of the products b/c they didn't visually inspect them up close until the listening test was over.

That being said, I still like to run blind and sighted tests to observe any variances in preferences. NOTHING substitutes for extended listening tests IMO. I like to capture as much data as possible between sighted comparisons, blind comparisons, and extended listening tests of one pair at a time. At least I did this for myself when I was a consumer buying speakers, whenever possible.

I often like something for a day or two and then find flaws over periods of time with anything whether it's a speaker, car, or woman (J/K I love my wife) ;)
 
krabapple

Banned
you obviously missed many key points in the article, particularly how DBTs are often manipulated or not run as DBT's.
Noticed -- hence the use of the word 'schizophrenic' to describe the article in my *very first sentence*.

You also likely didn't notice that Dr. Floyd Toole was one of the peer reviewers and contributors of points that you criticize, but I'd expect nothing less given your earned forum reputation :rolleyes:

I did notice, that's exactly why I suggested you run the DBT sections by Toole (and I notice that Olive has contributed to forum threads, so he's suggested there too). Are you saying you did? Can you point him to my reply and get some feedback? I'd love to hear his views on those points.


I also notice you didn't actually address any of my points. ( It's OK, your reputation on this subject is what it is, too. )
 
dobyblue

Senior Audioholic
Typo comment - feel free to delete after reading

The bottom line is it’s (rarely true) and most of the time just plain nonsense.

Putting "rarely true" in brackets results in a garbled sentence. Read the sentence without the brackets and you'll see what I mean. You should eliminate the brackets altogether.
 
gene

Audioholics Master Chief
Administrator
Noticed, hence the use of the word 'schizophrenic' to describe the article in my *very first sentence*




I did notice, that's exactly why I suggested you run the DBT sections by Toole (and I notice that Olive has contributed to forum threads, so he's suggested there too). Are you saying you did? Can you point him to my reply and get some feedback? I'd love to hear his views on those points.


I also notice you didn't actually address any of my points. ( It's OK, your reputation on this subject is what it is, too. )
Sean Olive did not publicly comment on this article as far as I am aware. He did give me some feedback initially, and I made some changes accordingly. I have immense respect for Dr. Toole and Dr. Sean Olive, but that doesn't mean I always agree with their research or how Harman decides to trickle that research into their actual products.

What I say may not be very popular with some manufacturers, but I base it on my 20+ years of experience in this field as well as my scientific training as an audio engineer who spent years in the telecom industry setting up controlled listening tests, contributing to standards bodies, and dealing with the politics therein. It is no coincidence that every speaker company that cherishes the blind test always wins its own shootouts. Though I notice that rhetoric has been winding down in the last few years since I've been writing about it. I rarely see loudspeaker companies use the term "similarly good" these days ;) Now they often claim they use blind testing to better understand perceptual differences in upgrades to their own products, more so than to compare against other brands.
 
gene

Audioholics Master Chief
Administrator
The bottom line is it’s (rarely true) and most of the time just plain nonsense.

Putting "rarely true" in brackets results in a garbled sentence. Read the sentence without the brackets and you'll see what I mean. You should eliminate the brackets altogether.
Please tell me where this is in the article so I can fix it. Thanks.
 
krabapple

Banned
Btw, labeling me as a 'forum troll idiot that just begs for attention' is such a classy move, Gene. Or is that Clint 'evolution? it's a hoax!' DeBoer's doing? Between this and that shadowbanning episode, I have to wonder....
 
