JBL Synthesis CEDIA 2011 Demo Evaluation

todd.packer

Audioholic Intern
The Harman graphs are only 2 octaves long. I personally like to measure bass from 10Hz to 200Hz to see how the subs roll off on both ends and how they blend with the front channels. This is what I measure when calibrating systems but it doesn't look as smooth as a 2 octave wide measurement of course.
This graph is only the below-80 Hz seat-to-seat variation graph, and has NO EQ or low-pass filter applied yet, just the subwoofer optimization (Sound Field Management). We don't normally even look at this graph except for the variation, as the frequency correction is done in another, separate step.

We always look at the high and low rolloff, and all measurements are from 10 Hz to 20 kHz.
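
For readers unfamiliar with the term, here is a minimal sketch of how below-80 Hz seat-to-seat variation could be quantified from per-seat measurements (illustrative only; the array shapes, cutoff, and data are assumptions, not the Synthesis calibration tooling):

```python
import numpy as np

def seat_to_seat_variation_db(freqs_hz, spl_db, f_max=80.0):
    """Mean spatial spread (dB) of the bass response across seats.

    freqs_hz : 1-D array of measurement frequencies in Hz
    spl_db   : 2-D array, shape (n_seats, n_freqs), SPL at each seat in dB
    """
    band = freqs_hz <= f_max                  # keep only the band below f_max
    spread = spl_db[:, band].std(axis=0)      # std across seats at each frequency
    return spread.mean()                      # single figure of merit in dB

# Hypothetical example: 4 seats measured from 10 Hz to 200 Hz
freqs = np.linspace(10.0, 200.0, 96)
spl = 75.0 + 3.0 * np.random.randn(4, freqs.size)     # fake measurements, dB SPL
print(f"Mean seat-to-seat spread below 80 Hz: {seat_to_seat_variation_db(freqs, spl):.1f} dB")
```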

-Todd
 
randyb

Full Audioholic
I have a question. Since horn "honkiness" is such a widespread assumption, how do people who listen sighted consciously remove that bias? The same goes for metal domes sounding steely, etc. It would be interesting to me to see a study of listeners who listen to recorded rooms (over quality headphones) with horns vs., say, ribbon tweeters vs. metal tweeters vs. standard tweeters, and try to identify each without knowing which is which.
 
todd.packer

Audioholic Intern
I have a question. Since horn "honkiness" is such a widespread assumption, how do people who listen sighted consciously remove that bias? The same goes for metal domes sounding steely, etc. It would be interesting to me to see a study of listeners who listen to recorded rooms (over quality headphones) with horns vs., say, ribbon tweeters vs. metal tweeters vs. standard tweeters, and try to identify each without knowing which is which.
If you really want to know how we do it, go here and watch this video. We do completely double blind listening tests of speakers every day. The results are surprisingly interesting.

http://seanolive.blogspot.com/2009/01/video-on-how-we-measure-loudspeaker.html
 
gene

Audioholics Master Chief
Administrator
If you really want to know how we do it, go here and watch this video. We do completely double blind listening tests of speakers every day. The results are surprisingly interesting.
Todd;

Just wondering how Harman determines the "best competitor". Is it based on market share or actual product performance? If the latter, what criteria are used? Do product comparisons always keep competitor products at price classes similar to Harman's?

In your DBTs, who is setting up the tests? Who is interpreting the data?

My understanding from conversations with Sean Olive is that you guys only do DBTs for JBL and Infinity but not Revel, because it would be cost-prohibitive to purchase high-end competitor speakers, which Revel doesn't have the budget to do.
 
tonmeister

Audioholic
Todd;

Just wondering how Harman determines the "best competitor". Is it based on market share or actual product performance? If the latter, what criteria are used? Do product comparisons always keep competitor products at price classes similar to Harman's?

In your DBTs, who is setting up the tests? Who is interpreting the data?

My understanding from conversations with Sean Olive is that you guys only do DBTs for JBL and Infinity but not Revel, because it would be cost-prohibitive to purchase high-end competitor speakers, which Revel doesn't have the budget to do.

I think I can answer that question. Typically, the Harman marketing/product managers determine who the competitors are. In the "old days" these would often be competitors we went head-to-head with in US stores like Circuit City, etc., and a different set of competitors for stores where our products are sold in Europe. And yes, sometimes a competitor's model is included that is believed to have exceptional performance in its price category. Usually the competitors are chosen based on price class. We will often include a competitor's speaker that is well outside the price of the Harman product being tested to see how well it stands up. We often find that a loudspeaker's price is not a reliable indicator of its sound quality. Floyd Toole has some data in his book that illustrates this, and the data also exists in my papers.

I'm not sure the retail-store criterion is relevant anymore, since many products are researched and/or purchased on the internet and never even auditioned in a store before purchase. Arguably, it matters more today what is written about the performance of a product on the internet, in audio forums like this one. The audio company or magazine that can show perceptually relevant acoustical measurements and subjective data on products will certainly stand out, and be better equipped to sell their products based on performance.

Gene, I don't think I ever said that Revel doesn't do competitive benchmarking. That is not true. It is true that the number of competitive products tested in that high-end price category is restricted by budgets. We do have a B&W 800D, for example, that was purchased for competitive benchmarking of Revels and high-end JBLs and Infinitys.

I wish I knew of a way I could rent or borrow expensive loudspeakers for a day or two for purposes of doing acoustical measurements and double-blind competitive benchmarking tests. I am all ears for suggestions. :)
 
GO-NAD!

Audioholic Spartan
I'm not sure the retail-store criterion is relevant anymore, since many products are researched and/or purchased on the internet and never even auditioned in a store before purchase. Arguably, it matters more today what is written about the performance of a product on the internet, in audio forums like this one. The audio company or magazine that can show perceptually relevant acoustical measurements and subjective data on products will certainly stand out, and be better equipped to sell their products based on performance.
At the risk of sending this thread off on a tangent, I believe you've brought up an important point. As more and more loudspeakers are purchased online, the publication of objective, standardized measurements and DBT results becomes more critical to making educated purchasing decisions.

I would also suggest that even the most ardent objectivists among us would say that the proof of the pudding is in the listening. But who wants to order, and then possibly ship back, 3 or 4 pairs of speakers (perhaps complete surround systems) after home auditioning? As far as I know, Aperion is the only company that will pay shipping both ways; so not only would it be a PITA, it could be quite expensive. This would certainly negate any supposed savings from ID purchasing.
 
GranteedEV

Audioholic Ninja
I wish I knew of a way I could rent or borrow expensive loudspeakers for a day or two for purposes of doing acoustical measurements and double-blind competitive benchmarking tests. I am all ears for suggestions. :)
Since it's not like you're publishing this data anyway, you could probably start threads on various forums asking owners if they'd be interested in seeing how their speakers measure in an anechoic chamber and how they perform in controlled listening tests. The 600 lb custom-setup Wilson etc. stuff might be out of the question, but we already know that stuff measures badly and has a sweet spot the size of a needle :D

Maybe we could get ADTG to bring his Orions :cool: ? I'd be interested in seeing how a modern dynamic dipole fares.
 
its phillip

Audioholic Ninja
Where are the Harman test facilities located? Just have everybody make a road trip, and whoever has high-end speakers should bring them along :D
 
gene

Audioholics Master Chief
Administrator
I think I can answer that question. Typically, the Harman marketing/product managers determine who the competitors are. In the "old days" these would often be competitors we went head-to-head with in US stores like Circuit City, etc., and a different set of competitors for stores where our products are sold in Europe. And yes, sometimes a competitor's model is included that is believed to have exceptional performance in its price category. Usually the competitors are chosen based on price class. We will often include a competitor's speaker that is well outside the price of the Harman product being tested to see how well it stands up. We often find that a loudspeaker's price is not a reliable indicator of its sound quality. Floyd Toole has some data in his book that illustrates this, and the data also exists in my papers.
That may be true with poorly designed speakers. But it's pretty easy to identify the good from the bad by looking at the parts used, impedance measurements, on/off-axis frequency response, and how the speaker misbehaves at its output limits.

Here is part 1 of 4 of a series of articles we are working on called:
Identifying a Legitimately High Fidelity Loudspeaker
(Part 2 should post in about 1-2 weeks)
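
As a rough illustration of that kind of objective screening (a sketch with assumed band limits and function names, not the method from the article), one could score an on-axis curve for flatness and compare it against an off-axis curve for directivity consistency:

```python
import numpy as np

def flatness_db(freqs_hz, spl_db, f_lo=200.0, f_hi=10000.0):
    """Standard deviation (dB) of the response within a band; lower is flatter."""
    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    return float(spl_db[band].std())

def directivity_error_db(freqs_hz, on_axis_db, off_axis_db, f_lo=200.0, f_hi=10000.0):
    """RMS shape difference (dB) between off- and on-axis curves; smooth directivity scores low."""
    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    diff = off_axis_db[band] - on_axis_db[band]
    return float(np.sqrt(np.mean((diff - diff.mean()) ** 2)))

# Hypothetical smoothed measurements
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
on_axis = 85.0 + 0.5 * np.sin(freqs / 900.0)            # fake, fairly flat on-axis curve
off_axis = on_axis - 2.0 - 0.5 * np.log10(freqs / 20.0)  # fake, gently falling off-axis curve
print(flatness_db(freqs, on_axis), directivity_error_db(freqs, on_axis, off_axis))
```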

There is one particular company that claims to run DBTs and makes statements such as "their speakers are similarly good to even the most expensive speakers in the world". They go so far as to say that "above $1500/pair you simply cannot improve the fidelity of the speakers, only appearances". Surely you guys don't believe that; else, why even have the Revel brand when JBL/Infinity would be "similarly good" for a fraction of the cost, right?

I'm not sure the retail-store criterion is relevant anymore, since many products are researched and/or purchased on the internet and never even auditioned in a store before purchase. Arguably, it matters more today what is written about the performance of a product on the internet, in audio forums like this one. The audio company or magazine that can show perceptually relevant acoustical measurements and subjective data on products will certainly stand out, and be better equipped to sell their products based on performance.
Very true and great points.

Gene, I don't think I ever said that Revel doesn't do competitive benchmarking. That is not true. It is true that the number of competitive products tested in that high-end price category is restricted by budgets. We do have a B&W 800D, for example, that was purchased for competitive benchmarking of Revels and high-end JBLs and Infinitys.
OK, thanks for the clarity on that point, and I apologize if I misstated/misunderstood our conversation.

I wish I knew of a way I could rent or borrow expensive loudspeakers for a day or two for purposes of doing acoustical measurements and double-blind competitive benchmarking tests. I am all ears for suggestions.
I may have a solution for this that involves a cooperative effort between Harman and Audioholics. First I want to understand your "DBT" procedure a bit more.

1. Who sets up your DBTs?
2. What are the measured losses from the screen that covers all the speakers?
3. How far away are the speakers placed from the screen?
4. What are the acoustical conditions of the listening room?
5. Does the user get to select their own music?
6. Does the user get to switch between speakers on the fly?
7. Does the user get to adjust volume level on the fly while still keeping all speakers level matched?
8. What type of data is collected regarding the users listening experience?
9. Who analyzes the data?
10. How much time does the user spend comparing the products?

I apologize for all the questions. But what always fascinates me is that every company that swears it adheres to a strict DBT protocol always seems to win, or at worst tie, the other speakers in its own internally run tests. They all seem to have perfected an allegedly bias-free testing methodology they can NEVER lose. Best of all, they've found a way to win (or at least tie) their internally run tests, often using the cheapest parts and minimalist designs in their products.

While I admire taking a scientific approach to testing and evaluating loudspeakers, I feel everyone must be careful in analyzing their testing procedures and results, or else false conclusions can easily be reached. Biases always exist in tests, even when they are run blind. Identifying these biases helps us better understand the results they produce.

We have conducted our own blind tests in the past, and to date, none of the brands that claim victory in their own DBTs have ever won first place. They are often very competitive against other products in their price class, however.

Interestingly, in our experience some of the prettiest speakers often perform poorly in both sighted and blind tests, which kind of debunks the argument that the listener will gravitate towards the prettiest speaker. Plus, I often wonder what opinion manufacturers have of the appearance of their own products when they worry that a sighted test may bias the listener towards the better-looking speaker. Does that mean they think their speakers are ugly? If so, why not make them even prettier, since people purchase either online or on a show floor in a sighted condition anyway?

I have found the casual listener will usually prefer the speaker that has more bass and treble in a direct comparison, even when levels are matched. This is why the listener should be educated on accurate sound reproduction and also given repeated, extended listening sessions to ensure they don't experience listening fatigue. So few listeners have ever had a truly good reference for sound; rarely do they ever experience a live UNAMPLIFIED acoustic performance. Sadly, most consider a rock concert with a stack of line-array speakers in a stadium to be reference-level performance.

We broach this topic in the article:
The Dumbing Down of Audio
 
tonmeister

Audioholic
Where are the Harman test facilities located? Just have everybody make a road trip, and whoever has high-end speakers should bring them along :D
Northridge, CA. I'll buy you lunch if you bring your high-end speakers :)
 
its phillip

Audioholic Ninja
Awesome. Unfortunately I don't have any high end speakers to bring :)
 
tonmeister

Audioholic
I may have a solution for this that involves a cooperative effort between Harman and Audioholics. First I want to understand your "DBT" procedure a bit more.

1. Who sets up your DBTs?
2. What are the measured losses from the screen that covers all the speakers?
3. How far away are the speakers placed from the screen?
4. What are the acoustical conditions of the listening room?
5. Does the user get to select their own music?
6. Does the user get to switch between speakers on the fly?
7. Does the user get to adjust volume level on the fly while still keeping all speakers level matched?
8. What type of data is collected regarding the users listening experience?
9. Who analyzes the data?
10. How much time does the user spend comparing the products?

I apologize for all the questions. But what always fascinates me is that every company that swears it adheres to a strict DBT protocol always seems to win, or at worst tie, the other speakers in its own internally run tests. They all seem to have perfected an allegedly bias-free testing methodology they can NEVER lose. Best of all, they've found a way to win (or at least tie) their internally run tests, often using the cheapest parts and minimalist designs in their products.
Regarding your specific questions:

1. The tests are set up by a technician who has been trained by me.

2. We've measured the acoustical effects of the curtain both anechoically and in the room with the grille cloth at different distances and angles of incidence from the speaker (both variables influence the measured response). The effect is pretty negligible (< 1 dB) at very high frequencies.

3. The distance of the speaker from the curtain depends on the products we are testing (some products are designed to be against/near the wall) but is typically ~ 1.5 ft.

4. The acoustical conditions of the MLL and Reference Rooms are well documented in these AES papers.
http://www.aes.org/e-lib/browse.cfm?elib=8338
http://www.aes.org/e-lib/browse.cfm?elib=14873

5. Program material is an experimental nuisance variable and must be properly controlled. We have some standard program material for formal tests that all listeners use for rating the loudspeakers. This is necessary to allow comparison/pooling of listener data. Listeners are trained and intimately familiar with the material. For informal tests, listeners can choose their own program.

6. Yes, there is only 1 listener in the room per test and they control the switching and pace of the test.

7. Absolute and relative levels are listening test nuisance variables that must be properly controlled. In formal tests, the relative levels of the speakers are matched for equal loudness, and the absolute level is fixed. We have the ability to give the listener control of absolute volume, but this is only done in informal tests or to test the dynamic limits of the speaker. I am working on a method to measure the maximum acceptable level of speakers via a method of adjustment where absolute volume is varied. (See the level-matching sketch after this list.)

8. For subjective data we get measures of overall preference, perceived spectral balance, nonlinear distortion/dynamics, spatial attributes, and general comments. Our listening test software allows us to easily add any defined attribute or scale to the test. It is just a matter of ensuring the listener is properly trained to understand what that attribute means and how to use the scale. We do this in listener training.

9. We have software that automatically statistically analyzes and graphs the data in real time in a web browser. It includes repeated-measures ANOVA, post-hoc tests, and metrics on listener performance. For more thorough multivariate analysis, we use dedicated statistical software packages. (See the ANOVA sketch after this list.)

10. There are no time limits on the listening test, so a listener can take 20 minutes or 20 hours. Most listeners complete the test in 20-30 minutes. Beyond that, fatigue becomes an issue.
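
To make point 7 concrete, here is an illustrative sketch (not Harman's actual procedure; the function name and measurement values are assumptions) of deriving per-speaker gain trims so every speaker plays back at the same measured level:

```python
import numpy as np

def level_match_trims_db(measured_db_spl, target_db_spl=None):
    """Gain trims (dB) that bring each speaker to the same playback level.

    measured_db_spl : per-speaker SPL at the listening position, measured with
                      the same pink-noise signal and the same input voltage.
    target_db_spl   : common level to match; defaults to the quietest speaker
                      so every trim is an attenuation (no added gain).
    """
    measured = np.asarray(measured_db_spl, dtype=float)
    if target_db_spl is None:
        target_db_spl = measured.min()
    return target_db_spl - measured

# Hypothetical measurements for four speakers behind the curtain
print(level_match_trims_db([86.2, 84.9, 87.5, 85.1]))   # trims in dB, all <= 0
```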
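
And for point 9, a minimal sketch of a repeated-measures ANOVA on listener preference ratings, using the statsmodels package with fabricated data (the listener count, rating scale, and factor names are assumptions, not Harman's actual software):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
listeners = [f"L{i}" for i in range(1, 11)]   # 10 trained listeners
speakers = ["A", "B", "C", "D"]               # 4 speakers behind the curtain
programs = ["track1", "track2", "track3"]     # standard program material

# Fabricated preference ratings on a 0-10 scale, one per listener/speaker/program cell
rows = [{"listener": l, "speaker": s, "program": p,
         "preference": float(np.clip(rng.normal(6.0 + 0.5 * speakers.index(s), 1.0), 0, 10))}
        for l in listeners for s in speakers for p in programs]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: do mean preference ratings differ across speakers/programs?
result = AnovaRM(df, depvar="preference", subject="listener",
                 within=["speaker", "program"]).fit()
print(result)   # F statistics and p-values per within-subject factor
```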
 
gene

Audioholics Master Chief
Administrator
1. The tests are set up by a technician who has been trained by me.

2. We've measured the acoustical effects of the curtain both anechoically and in the room with the grille cloth at different distances and angles of incidence from the speaker (both variables influence the measured response). The effect is pretty negligible (< 1 dB) at very high frequencies.

3. The distance of the speaker from the curtain depends on the products we are testing (some products are designed to be against/near the wall) but is typically ~ 1.5 ft.

4. The acoustical conditions of the MLL and Reference Rooms are well documented in these AES papers.
http://www.aes.org/e-lib/browse.cfm?elib=8338
http://www.aes.org/e-lib/browse.cfm?elib=14873

5. Program material is an experimental nuisance variable and must be properly controlled. We have some standard program material for formal tests that all listeners use for rating the loudspeakers. This is necessary to allow comparison/pooling of listener data. Listeners are trained and intimately familiar with the material. For informal tests, listeners can choose their own program.

6. Yes, there is only 1 listener in the room per test and they control the switching and pace of the test.

7. Absolute and relative levels are listening test nuisance variables that must be properly controlled. In formal tests, the relative levels of the speakers are matched for equal loudness, and the absolute level is fixed. We have the ability to give the listener control of absolute volume, but this is only done in informal tests or to test the dynamic limits of the speaker. I am working on a method to measure the maximum acceptable level of speakers via a method of adjustment where absolute volume is varied.

8. For subjective data we get measures of overall preference, perceived spectral balance, nonlinear distortion/dynamics, spatial attributes, and general comments. Our listening test software allows us to easily add any defined attribute or scale to the test. It is just a matter of ensuring the listener is properly trained to understand what that attribute means and how to use the scale. We do this in listener training.

9. We have software that automatically statistically analyzes and graphs the data in real time in a web browser. It includes repeated-measures ANOVA, post-hoc tests, and metrics on listener performance. For more thorough multivariate analysis, we use dedicated statistical software packages.

10. There are no time limits on the listening test, so a listener can take 20 minutes or 20 hours. Most listeners complete the test in 20-30 minutes. Beyond that, fatigue becomes an issue.
That's cool you have a blind screen with such small losses. In the past we used speaker grille cloth tightly stretched across the room and found considerable measurable differences, seen here:



As you can see, there is considerably more loss than 1 dB even down to 4 kHz. It also matters how far away the speaker is from the grille cloth: the further away, the more diffraction and associated losses. Obviously you guys have a much better handle on this, and I admire your adherence to a very precise, tightly toleranced testing procedure.
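
For context, the kind of loss being compared here can be computed as a simple insertion-loss curve from two measurements, one with and one without the cloth in the path (an illustrative sketch with fabricated data, not the measurement referenced above):

```python
import numpy as np

def insertion_loss_db(mag_without, mag_with):
    """Insertion loss (dB) of a screen at each frequency.

    mag_without, mag_with : linear magnitude responses at the same mic position
    with the screen removed and in place. Positive values mean attenuation.
    """
    return 20.0 * np.log10(np.asarray(mag_without) / np.asarray(mag_with))

# Fabricated example: attenuation that grows to about 3 dB by 20 kHz
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
without = np.ones_like(freqs)
with_cloth = 10.0 ** (-(np.clip(freqs - 4000.0, 0.0, None) / 16000.0) * 3.0 / 20.0)
loss = insertion_loss_db(without, with_cloth)
print(f"Max loss above 4 kHz: {loss[freqs >= 4000].max():.1f} dB")
```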

As a result, the "brighter" speaker performed better under such a test condition. I started blindfolding people instead of masking the speakers, but it's challenging to get them to write results on a scorecard they can't see.

What I usually do is get their impressions this way over extended listening sessions, and the results are pretty interesting.

That's cool you guys use ANOVA. I'd like to discuss, perhaps outside of this thread, how you interpret the results of the crunched data, as well as what your null hypothesis is prior to running these tests.

I can likely find at least 3-4 companies willing to submit product to you if you are willing to fly me up to run the test. This could be quite educational for all parties involved, and it would certainly be a lot of fun. I'd love to hear some Revels and meet all the fine folks at Harman :D
 
riker1384

Junior Audioholic
5. Program material is an experimental nuisance variable and must be properly controlled. We have some standard program material for formal tests that all listeners use for rating the loudspeakers. This is necessary to allow comparison/pooling of listener data. Listeners are trained and intimately familiar with the material. For informal tests, listeners can choose their own program.
How do you know that the recordings you use are accurate and that you aren't biasing the test by picking music that sounds good on Harman speakers or your reference speakers?

By the way, I heard that Harman is moving their speaker engineering to China. Is this going to change the design approach or the evaluation process?
 