I think I can answer that question. Typically, the Harman marketing/product managers determine who the competitors are. In the "old days" these would often be competitors we went head-to-head with in US stores like Circuit City, etc., and a different set of competitors for the stores where our products are sold in Europe. And yes, sometimes a competitor's model is included that is believed to have exceptional performance in its price category. Usually the competitors are chosen based on price class. We will often include a competitor's speaker in a test that is well outside the price of the Harman product being tested to see how well it stands up. We often find that a loudspeaker's price is not a reliable indicator of its sound quality. Floyd Toole has some data in his book that illustrates that, and the data also exists in my papers.
That may be true with poorly designed speakers. But it's pretty easy to identify the good from the bad by looking at the parts used, the impedance measurements, the on/off-axis frequency response, and how the speaker misbehaves at its output limits.
Here is part 1 of 4 of a series of articles we are working on called:
Identifying a Legitimately High Fidelity Loudspeaker
(Part 2 should post in about 1-2 weeks)
There is one particular company that claims to run DBTs and makes statements such as "their speakers are similarly good to even the most expensive speakers in the world". They go so far as to say that "above $1500/pair you simply cannot improve the fidelity of the speakers, only appearances". Surely you guys don't believe that, else why even have the Revel brand when JBL/Infinity would be "similarly good" at a fraction of the cost, right?
I'm not sure the retail store criterion is relevant anymore, since many products are researched and/or purchased on the internet, and never even auditioned in a store before they are purchased. Arguably, what matters more today is what is written about the performance of the product on the internet, in audio forums like this one. The audio company or magazine that can show perceptually relevant acoustical measurements and subjective data on products will certainly stand out, and be better equipped to sell its products based on performance.
Very true and great points.
Gene, I don't think I ever said that Revel doesn't do competitive benchmarking. That is not true. It is true that the number of competitive products tested in that high-end price category is restricted by budgets. We do have a B&W 800D, for example, that was purchased for competitive benchmarking of Revels and high-end JBLs and Infinitys.
Ok thanks for the clarity on that point and I apologize if I misstated/misunderstood our conversation.
I wish I knew of a way to rent or borrow expensive loudspeakers for a day or two for purposes of doing acoustical measurements and double-blind competitive benchmarking tests. I am all ears for suggestions.
I may have a solution for this that involves a cooperative effort between Harman and Audioholics. First I want to understand your "DBT" procedure a bit more.
1. Who sets up your DBTs?
2. What are the measured losses from the screen that covers all the speakers?
3. How far away are the speakers placed from the screen?
4. What are the acoustical conditions of the listening room?
5. Does the user get to select their own music?
6. Does the user get to switch between speakers on the fly?
7. Does the user get to adjust volume level on the fly while still keeping all speakers level matched?
8. What type of data is collected regarding the users listening experience?
9. Who analyzes the data?
10. How much time does the user spend comparing the products?
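On question 7 above: level matching is commonly handled by measuring each speaker's SPL at the listening position with a calibration signal (e.g. band-limited pink noise) and applying a fixed per-speaker gain trim, so that a single master volume control then moves all speakers together while their relative levels stay matched. A minimal sketch of that trim calculation, with `matching_gains_db` being a hypothetical helper name rather than anything from Harman's actual procedure:

```python
def matching_gains_db(measured_spl_db, target_spl_db=None):
    """Per-speaker gain trim (dB) that brings each measured SPL to a
    common target level (default: the quietest speaker's level)."""
    if target_spl_db is None:
        target_spl_db = min(measured_spl_db)
    # Round to 0.1 dB, roughly the resolution of a typical SPL meter
    return [round(target_spl_db - spl, 1) for spl in measured_spl_db]

# Example: three speakers measured at the listening position with pink noise
print(matching_gains_db([86.2, 84.0, 85.1]))  # [-2.2, 0.0, -1.1]
```

Trimming down to the quietest speaker (rather than boosting up to the loudest) avoids pushing any channel into clipping.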
I apologize for all the questions. But what always fascinates me is that every company that swears to adhere to a strict DBT protocol always seems to win, or at worst tie with, the other speakers in its own internally run tests. They all seem to have perfected an allegedly bias-free testing methodology they can NEVER lose with. Best of all, they have found a way to win (or at least tie) their internally run tests, often using the cheapest parts and the most minimalist designs in their products.
While I admire taking a scientific approach to testing and evaluating loudspeakers, I feel everyone must be careful in analyzing their testing procedures and results, or else false conclusions can easily be reached. Biases always exist in tests, even when they are run blind. Identifying those biases helps us better understand the results they produced.
We have conducted our own blind tests in the past and, to date, none of the brands that claim victory in their own DBTs has ever won first place. Their products are often very competitive when pitted against others in their price class, however.
Interestingly, in our experience some of the prettiest speakers often perform poorly in both sighted and blind tests, which somewhat debunks the argument that the listener will gravitate toward the prettiest speaker. Plus, I often wonder what opinion manufacturers have of the appearance of their own products when they worry that a sighted test may bias the listener toward the better-looking speaker. Does that mean they think their speakers are ugly? If so, why not make their speakers even prettier, since people purchase either online or on a showroom floor, sighted, anyway?
I have found that the casual listener will usually prefer the speaker that has more bass and treble in a direct comparison, even when levels are matched. This is why the listener should be educated on accurate sound reproduction, and also given repeated and extended listening sessions to ensure they aren't simply reacting to listening fatigue. So few listeners have ever had a truly good reference for sound. Rarely do they ever experience a live, UNAMPLIFIED acoustical performance. Sadly, most consider a rock concert with a stack of line-array speakers in a stadium to be reference-level performance.
We broach this topic in the article:
The Dumbing Down of Audio