Erin's CEA-2010 Subwoofer Testing Master Thread

ErinH
Audioholic General
Hey, folks. You may have seen my other thread on ASR (for those who are members there) gauging interest in a "budget subwoofer shootout" using the CEA-2010 standard. Based on the positive feedback, I ordered five 10-inch powered subwoofers.

After conducting the tests I thought, "Hey, this wasn't so bad. I could do this for other subs." And thus a simple shootout evolved into what I plan and hope will turn into a master database of subwoofer testing.

First, however, I think it is important to lay the groundwork for this testing and provide some insight into my setup and methods. Therefore, I have created the video below as my kickoff. I think some insight into this will help people understand exactly what is being tested and how the results are compartmentalized for sharing with the world.

[Video: walkthrough of the test setup and methods]
That's it. Below is the link to my Google Sheet, where I will be keeping all of this information updated. I hope you all enjoy.






Erin's Master Sheet of CEA-2010 Results:

https://docs.google.com/spreadsheets/d/18bz7z-xIlRJsC-bw6k4mHkuwv_uiGAMyEhgrTkjwdXc/edit?usp=sharing


If you have a question, read the notes. They are there for a reason and may answer your question. If you want x/y/z... I probably won't provide it. I know, that sounds rude. But I want to keep this simple. I have added and removed countless formulas, sheets, and graphs, all because I thought, "Yeah, it would be cool to have, but the average person won't need it and they'll get confused." Remember, the people who are likely to pay attention to these numbers may not care about all the fun things you can do with the data. They just want the results.

I initially had A & B on the same sheet. I thought it would be easy to navigate and understand. I sent it to some friends who aren't familiar with the specs, and they were confused by all the little "notes" needed because A & B don't have the same frequency set. So I split the results out to separate sheets.

Also, read the Foreword. It explains the method, the difference between A & B and why I will or will not provide certain data.


Contribute:
If you like what you see here and want to help me keep it going, please consider donating or purchasing a shirt via my Contribute page located here. Donations help me pay for new items to test, hardware to build test rigs (some speakers require different test stands), miscellaneous items, and the cost of the site's server space and bandwidth, all of which I otherwise pay out of pocket. So, if you can chip in a few bucks, know that it's very much appreciated. I've reached out to various manufacturers to ask for test samples for review (with the promise to send them back); so far, none have even replied. Or, if you have a subwoofer you'd like to have tested, please contact me. If you can cover shipping both ways, that would be great, but maybe with enough donations we can take care of that or help offset the costs.
 
shadyJ
Speaker of the House
Staff member
Hello Erin,

A couple things to be aware of:
Firstly, you are using CEA-2010-B to test. The 'B' version of that test is not really comparable with results from other testers. Audioholics, Data-bass.com, and Brent Butterworth's reviews all use CEA-2010-A, as do many manufacturers. The distortion thresholds are different for 'B'; in fact, they are quite a bit more lenient for deep-bass distortion. It allows for ridiculous levels of distortion. When the Consumer Technology Association sent out an early copy for peer review, they got a lot of pushback on this, but they implemented it anyway. Most people who actually have to deal with subwoofer performance will tell you CEA-2010-B sucks. If I were you, I would look into adopting CEA-2010-A, not CEA-2010-B; that would make your gathered data comparable with that taken by other testers. REW allows you to test with CEA-2010-A. That brings me to my next point:

You should understand that different software yields different results, even when adhering to the same protocol. Some testers use the CLIO system, some use Don Keele's module for IGOR Pro, some use REW. They all give slightly different results, even where everything else is the same. The method and speed at which the test signal is captured affect the results. Drivers and audio-interface software also affect the results. I don't know how seriously you are taking all of this subwoofer testing, but you should try different audio interfaces and testing software to see what can happen. If some equipment or software combination is getting you dramatically different results than others, you know there is likely a problem there.
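
If you do end up cross-checking rigs, even a trivial script that diffs the per-frequency results will flag problems quickly. A minimal sketch; the CSV layout and file names here are just assumptions for illustration:

```python
# Sketch: sanity-checking CEA-2010 numbers taken with two different
# measurement chains (e.g. REW vs. CLIO exports). Column names and
# file paths are made up for illustration.
import csv

def load_results(path):
    """Read 'freq_hz,passing_spl_db' rows into a dict {freq: spl}."""
    with open(path, newline="") as f:
        return {float(r["freq_hz"]): float(r["passing_spl_db"])
                for r in csv.DictReader(f)}

rig_a = load_results("rew_results.csv")     # hypothetical export from rig A
rig_b = load_results("clio_results.csv")    # hypothetical export from rig B

for freq in sorted(rig_a.keys() & rig_b.keys()):
    delta = rig_a[freq] - rig_b[freq]
    flag = "  <-- check this" if abs(delta) > 1.0 else ""
    print(f"{freq:6.1f} Hz: {rig_a[freq]:5.1f} vs {rig_b[freq]:5.1f} dB "
          f"(delta {delta:+.1f} dB){flag}")
```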

There is a bunch of other stuff I could mention, but for now I will just say that the protocol is not perfect and sometimes it will not accurately reflect a subwoofer's performance if you follow it strictly. You should do what you can to make sure that a subwoofer's performance is fairly recorded rather than strictly following protocol.
 
lovinthehd
Audioholic Jedi
I'll try and muster the will to watch yet another video instead of a well-written article....
 
ErinH
Audioholic General
lovinthehd said: "I'll try and muster the will to watch yet another video instead of a well-written article...."
I do both. Both suck. But explaining and helping people visualize a test setup is much easier in a video than it is to type it up and provide various pictures.
 
lovinthehd
Audioholic Jedi
ErinH said: "I do both. Both suck. But explaining and helping people visualize a test setup is much easier in a video than it is to type it up and provide various pictures."
Glad I'll get to read the article.... I understand video is easier; hazard of the times.
 
ErinH
Audioholic General
shadyJ said: "Firstly, you are using CEA-2010-B to test. The 'B' version of that test is not really comparable with results from other testers. [...] You should do what you can to make sure that a subwoofer's performance is fairly recorded rather than strictly following protocol."
Yeah, I have a lot of thoughts on everything you just said. But I'll keep my personal opinions to myself.

Still, you'd think that if there's a standard, a) people would follow it and b) it would make sense. Those two conflict, and you get what we have in the industry: various takes on how a test "should" be performed and various software that can lead to varying results (outside of the given environment differences). And then we have CTA-2034's max SPL tests... No wonder the general public is so confused. The dang CTA/CEA/whatever-they-are can't even provide a single spec. Oh, and then if I want to read the spec I have to pay $92. In the world of industry standards, it blows my mind that the industry would make each other pay for a specification. In my mind, that's kind of like saying "if you want to do what we tell you to do, you'll have to pay us to tell you how to do it"... no wonder some people don't bother reading the updates. Ugh.

On one hand, I like that 'B' is weighted per frequency. Geddes & Lee's research is heavily weighted (no pun intended) toward the masking aspect. And my own years' worth of non-linear distortion testing has proven to me that not all distortions are equal, and that music as the stimulus is best for determining these things; since that can't be used, we need to do our legwork and provide multitone, IMD, and HD testing (all three are tests I provide). Even then, we can't guarantee with 100% accuracy that what is in the data will translate to what one hears, because of the immense number of variables.

I also like that 'B' added the higher frequencies. I know some say that's inconsequential, but to me it is important because it provides an idea of linearity. If a speaker has high inductance and/or high Bl asymmetry/non-linearities, that will show up in the midrange. For a subwoofer, which always plays at/below Fs, this is even more important because the linearity tells us just how good the motor design is and whether that motor has features like shorting rings to lower the inductance swing. Not to mention that it is easier to blend a speaker with another one when both are linear, because you can more easily control the integration when you aren't fighting high delay values and don't have features like an all-pass or variable phase (another all-pass). I deal with this kind of thing all the time when tuning systems. People don't seem to grasp how incredibly important subwoofer linearity is, not just within the typical passband but also outside of it.
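
For anyone curious what the IMD test I mentioned looks like in practice, here's a rough sketch of a two-tone stimulus and readout. The tone pair and the analysis details are my own illustrative choices, not from any spec:

```python
# Sketch: a basic two-tone IMD stimulus and readout. Tone choices are
# illustrative picks, not from any standard.
import numpy as np

fs = 48000
f1, f2 = 40.0, 200.0                       # bass tone + midrange tone
t = np.arange(int(fs * 2)) / fs            # 2 seconds of signal
stimulus = 0.5 * np.sin(2*np.pi*f1*t) + 0.5 * np.sin(2*np.pi*f2*t)

def level_at(signal, freq, fs):
    """Magnitude (dB) of the FFT bin nearest 'freq'."""
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    freqs = np.fft.rfftfreq(len(signal), 1/fs)
    bin_idx = np.argmin(np.abs(freqs - freq))
    return 20 * np.log10(np.abs(spectrum[bin_idx]) / len(signal) + 1e-12)

# After playing 'stimulus' through the sub and capturing the mic signal,
# look for intermodulation products at f2 +/- n*f1 (use the capture here):
ref = level_at(stimulus, f2, fs)
for n in (1, 2):
    for prod in (f2 - n*f1, f2 + n*f1):
        print(f"IMD product at {prod:.0f} Hz: "
              f"{level_at(stimulus, prod, fs) - ref:+.1f} dB re f2")
```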

Anyway, I appreciate your feedback. I actually didn't mean to have "-B" in my YT video title. I intended to keep it more open-ended because the stimulus and how it's processed to arrive at a value isn't terribly important in the demo. What is important is that people understand the setup environment and have an understanding of what the test is like. I'll give this more thought and I'll re-test if I decide to fall back to the "-A" standard.
 
shadyJ
Speaker of the House
Staff member
CEA-2010 wasn't created to adhere to audible distortion thresholds. While distortion audibility was a factor in their thresholds, the main criterion is approximately what kind of distortion is seen when the driver leaves Xmax. Those thresholds have more to do with mechanical limits than anything psychoacoustic. The CEA committee didn't pay that much attention to the research on the matter, and I would be surprised if Geddes' research was a big factor, if it was a factor at all. I know they were aware of Eric Benjamin and Louis Fielder's paper, and that was probably the extent of the basis for weighting per frequency. While the idea of weighting per frequency is not bad and would be a step in the right direction, their thresholds are so absurd that it hardly matters. The distortion at the threshold limits is ridiculously audible, so much so that it completely swamps the fundamental. The sound of it is just nuts and is well past any test of fidelity. Anyway, if you haven't already heard how bad it can get, you certainly will with that selection of subs.
 
shadyJ
Speaker of the House
Staff member
Look at the subs in his video; they are inexpensive ones from Klipsch, Polk, etc. They are going to produce a lot of distortion at high drive levels, especially at low frequencies.
 
mazersteven
Audioholic Warlord
Would have liked to see these:

Parts Express Dayton 10"

Monoprice

SVS PB1000

RSL Speedwoofer

Fluance DB10
 
ErinH
Audioholic General
mazersteven said: "Would have liked to see these: Parts Express Dayton 10", Monoprice, SVS PB1000, RSL Speedwoofer, Fluance DB10"
I would, too. But seeing as how I am paying for these myself, well, you get what you get.

If you or anyone else wants to provide the means to test those (either through donation or loan) then I would be happy to add them to my ongoing testing.

I have reached out to both PE and Monoprice to see if they would be willing to send review samples. Crickets so far.

And, FWIW, when I do contact manufacturers I *never* ask for anything for free to keep. I always specifically ask for review samples. Loaners.
 
ErinH
Audioholic General
James, I appreciate your input. I thought it over last night. I still see merit in using -B. Though, I don't have any arguments against using -A. I'm personally leaning toward a re-test of the subs I have on hand with -A. But I'm going to give it some more thought. I did reach out to Brent via Facebook to ask his opinion. I'm going to guess he recommends -A as well.

Thanks again for taking the time to interject and giving me something to chew on.
 
ErinH
Audioholic General
In reply to: "Did I miss somewhere which Subs he's using?"
Also, the reason I am calling this a "master thread" is because I plan to continue this going forward. But, as I indicated above, what I am able to test depends on what I can afford and what I am able to receive on loan for testing.
 
ErinH
Audioholic General
shadyJ said: "...the main criterion is approximately what kind of distortion is seen when the driver leaves Xmax. Those thresholds have more to do with mechanical limits than anything psychoacoustic."
FWIW, the IEC 62458 standard for drive units defines linear xmax at 10% HD/IMD thresholds. That's what Klippel uses, and it's done at Fs, whereas many manufacturers define it at will using various geometric methods. So when talking about mechanical limits, it's important to understand there is already an industry standard that defines this using those criteria. There is also a more relaxed, accepted method of using 20% THD for subwoofers, made popular by Patrick Turnmire of Red Rock Acoustics. You can find an example of testing with both metrics applied to an 8" woofer I recently tested here if you are interested:




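To make the threshold idea concrete, here's a rough sketch of pulling an xmax figure out of a distortion-vs-excursion sweep at both the 10% and 20% criteria. The data points are invented placeholders, not Klippel output:

```python
# Sketch: "linear xmax" in the spirit of IEC 62458 -- the excursion where
# distortion first crosses a threshold (10% per the standard, 20% per the
# relaxed subwoofer convention mentioned above). Placeholder data only.
import numpy as np

excursion_mm = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
thd_percent  = np.array([1.5, 3.0, 6.0, 11.0, 19.0, 31.0])

def xmax_at(threshold, x, thd):
    """Linearly interpolate the excursion where THD crosses 'threshold' %."""
    above = np.where(thd >= threshold)[0]
    if len(above) == 0:
        return None                      # never crossed within the sweep
    i = above[0]
    if i == 0:
        return x[0]
    # interpolate between the last point below and first point above
    frac = (threshold - thd[i-1]) / (thd[i] - thd[i-1])
    return x[i-1] + frac * (x[i] - x[i-1])

print(f"xmax @ 10% THD: {xmax_at(10, excursion_mm, thd_percent):.1f} mm")
print(f"xmax @ 20% THD: {xmax_at(20, excursion_mm, thd_percent):.1f} mm")
```
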
Now, as for the -B standard having lower threshold values for the lower frequencies: while this is true, I think it's also important to note that -B provides more stringent thresholds for 40-80Hz than -A does, and much higher thresholds for 80Hz-160Hz than -A. Logically, this makes sense.

I had to dig up a thread on ASR, but this guy's image does a great job showing the differences. Seen side by side, I think it's pretty eye-opening. And while I understand you (and possibly others) don't like the lowered thresholds for low frequencies set forth by -B, I actually do like the heightened thresholds for the higher frequencies, given masking of distortion.

For those who do not know, here is the -A spec:
All frequencies have the same thresholds (dB relative to the fundamental):
Fundamental = 0
2nd order = -10
3rd order = -20
4th order = -25
5th order = -35
6th order = -45

And looks like this (from Audioholics):




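For the programmatically inclined, the -A table above is easy to encode and apply to measured harmonic levels. A minimal sketch (the measured numbers are invented; as I understand the procedure, the tester also walks the drive level up and reports the highest SPL that still passes):

```python
# Sketch: applying the CEA-2010-A thresholds listed above to a set of
# measured harmonic levels. The harmonic levels are invented example
# numbers; real ones come from the FFT of the captured burst.
CEA_2010_A_LIMITS = {2: -10.0, 3: -20.0, 4: -25.0, 5: -35.0, 6: -45.0}

def passes_cea2010a(harmonics_db: dict) -> bool:
    """harmonics_db maps harmonic order -> level in dB re the fundamental."""
    return all(harmonics_db.get(order, -999.0) <= limit
               for order, limit in CEA_2010_A_LIMITS.items())

# Example: a capture with a hot 3rd harmonic fails
measured = {2: -14.2, 3: -17.8, 4: -30.1, 5: -41.0, 6: -50.3}
print("PASS" if passes_cea2010a(measured) else "FAIL")   # -> FAIL (3rd > -20 dB)
```
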
-B varies by frequency, and typing those up is a royal pain, so let me quote the user on ASR:

Here are the harmonic distortion limits from ANSI/CTA-2010-B. The test signals are 1/3 octave band-limited tone bursts from 20-160 Hz. Band 1 is 20-32 Hz, band 2 is 40-63 Hz and band 3 is 80-160 Hz. The center frequencies of the 1/3 octave bands are 20, 25, 32, 40, 50, 63, 80, 100, 125, and 160 Hz.
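
And for reference, generating the burst stimulus itself is straightforward. Here's a sketch using the center frequencies from the quote above; the 6.5-cycle, Hann-windowed shape is the commonly cited CEA burst, so treat that shaping as an assumption if you haven't read the spec yourself:

```python
# Sketch: tone bursts at the quoted 1/3-octave centers. Assumes the
# commonly described CEA-2010 burst: 6.5 cycles of a sine shaped by a
# Hann window.
import numpy as np

def cea_burst(f0: float, fs: int = 48000, cycles: float = 6.5) -> np.ndarray:
    n = int(round(cycles / f0 * fs))       # burst length in samples
    t = np.arange(n) / fs
    return np.hanning(n) * np.sin(2 * np.pi * f0 * t)

centers_hz = [20, 25, 32, 40, 50, 63, 80, 100, 125, 160]
bursts = {f: cea_burst(f) for f in centers_hz}
print({f: len(b) for f, b in bursts.items()})   # samples per burst
```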

It is worth noting that -B doesn't call out values for <20Hz. Which makes me wonder... does -A? Or have reviewers simply been applying the thresholds to the lower frequencies at will (and, for that matter, to higher frequencies, since -A only goes up to 80Hz, IIRC)? If the latter, IMHO, that's where the line between "specification" and "I'm gonna do what I think should be done" is clearly crossed. As a fellow reviewer, I understand that our own opinion of what's best often wins out. But when providing data as adherence to a particular specification (by calling it out by name), it is important to note variance from said spec.

I'm personally not dead-set on either. I see and understand the merit of both. But, as I said previously, I like the weighting of -B. It's logical, given my own experiences of distortion masking.

I have emailed Klippel to ask their thoughts on this subject and to see what they know about how the 2010 standard is applied in industry (aside from hobbyist-level reviewers such as myself and others).
 
shadyJ
Speaker of the House
Staff member
I would put more emphasis on distortion in lower frequencies than higher frequencies. I understand that lower frequencies do better at auditory masking than higher frequencies, but you have to keep in mind that subs tend to have much cleaner output as frequencies go up since, obviously, less excursion is needed to produce the same SPL. Distortion plagues the low end of the spectrum, not the high end.

And for all the ideas about lower-frequency masking, distortion is still grossly audible down there. It just does not tend to be a problem at higher frequencies. Furthermore, not many users even use crossover frequencies higher than 80Hz. As I said before, changing distortion thresholds based on frequency might have been a good idea, but it is poorly implemented in CEA-2010-B. The allowed levels of distortion are ridiculous.

I would definitely be testing lower than 20Hz as well. Think about what your readers will want. That is a frequency range they are interested in, and it is one where there are major differences in performance. While budget subs won't be doing much down there, there are plenty of subs that do target performance in that range.
 
ErinH
Audioholic General
I spoke with Brent Butterworth last night. He gave me some good insight. In a nutshell, he still prefers -A but didn't seem staunchly against -B. Personally, I like -B. But comparison against others' -A testing is nice to have as well.

Therefore, I have decided that I will provide both -A and -B spec results. It will take more time, but it will cover me for *general* comparison against others' tests and future-proof me if -B takes over and people begin adopting it more fully.

Since the spec for -A doesn't specify thresholds for anything other than 20-63Hz, I will do the same as others have done and use the same thresholds for other frequencies. But, I will make sure to note those are not within the purview of the -A standard and are thus provided as "extra" with those thresholds.

AFAIK, -B doesn't specify thresholds for <20Hz. I have contacted Klippel about their advice on this.

So, I will re-test the subwoofers I have on hand with -A this week and then I will post the data for both standards' results.

- Erin
 
ErinH
Audioholic General
Alright! Testing is done for the first 5 subwoofers. I used BOTH CEA-2010-A and CEA-2010-B.

I updated the OP to reflect the master sheet. But here it is again:

https://docs.google.com/spreadsheets/d/18bz7z-xIlRJsC-bw6k4mHkuwv_uiGAMyEhgrTkjwdXc/edit?usp=sharing
The 5 subwoofers I tested this round are pictured below. From left to right, facing forward, they are:
  • Yamaha NS-SW100BL
  • Polk Audio PSW10
  • Klipsch Reference R-10SW
  • Sony SACS9
  • Elac SUB1010


They're basically all midbass modules, IMHO.
The Elac seems to be the clear winner here in terms of value. It has the most linear response. It also has some of the best numbers (behind the Polk in the -A testing, but the leader in the -B testing). And it's the cheapest.

[Photo: the five subwoofers under test]





As far as value goes, let's look at the -A results:
The Polk wins out here at $1.17/dB (price divided by measured output; lower is better). I've ranked the Elac's value at $1.20/dB. The next closest value is $1.49/dB (Sony).

-B results for value:
Elac at $1.21/dB. Next best is Polk at $1.22/dB.

The Polk and the Elac are neck and neck as far as value goes. But when you look at the frequency response, again, the Elac wins out. Not to mention the Elac absolutely wins the "output per size" category.
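
For anyone wanting to rank their own candidates the same way, the value metric is just price divided by the output number. A quick sketch with placeholder figures (not the actual test results):

```python
# Sketch: the price-per-dB "value" metric used above -- price divided by
# the measured output number, lower being better. Placeholder data only.
subs = {
    "Sub A": {"price": 120.0, "output_db": 102.5},
    "Sub B": {"price": 130.0, "output_db": 107.4},
    "Sub C": {"price": 150.0, "output_db": 100.7},
}

ranked = sorted(subs.items(), key=lambda kv: kv[1]["price"] / kv[1]["output_db"])
for name, d in ranked:
    print(f"{name}: ${d['price'] / d['output_db']:.2f}/dB")
```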




The Klipsch has more output capability in the lowest octaves under the -B thresholds, but it couldn't pass the 20Hz test and didn't pass any of the -A tests, so... bleh.


 
KEW
Audioholic Overlord
ErinH said: "If you or anyone else wants to provide the means to test those (either through donation or loan) then I would be happy to add them to my ongoing testing. [...] I have reached out to both PE and Monoprice to see if they would be willing to send review samples. Crickets so far."
If you like, I can bring a SUB1200. Not sure how important keeping to 10" is, since the SUB1200 at $150 shipped (last time I checked) is probably still below the price of most of the 10" ones.
I also have a JBL 550P and an Infinity Reference SUB R10. These are both 10" subs. Their MSRP is in the $450-500 territory, IIRC, but like the Studio 530s they go on sale at 50% or greater discounts on a routine basis, making them available for around $230 or less.
 
ErinH

ErinH

Audioholic General
I'm wide open now. The first round was just for 10-inch woofers because I wanted to batch-test a bunch of similarly priced and sized woofers for a direct comparison.

But I have opened the gates. You could even bring the subs and we can test them the same day (assuming the weather is decent enough). Let's stay in touch on this topic. And thanks for volunteering them. :)
 
