aggregating some data on a few budget subs

rojo

Audioholic Samurai
I was just aggregating CEA-2010 data from various sources and making a spreadsheet. Thought you guys might be interested to see it. I've got the spreadsheet attached to this post.



The Dayton SUB-1500 and both BIC subs are actually pretty impressive for their cost. Although the SUB-1500 was the top sound-quality pick for one member of the review team, the similarly performing BIC V1220 was subjectively less impressive, according to this Wirecutter article:

Brent Butterworth said:
We hoped the BIC America V1220 might be a nice step up from our alternative pick, the BIC America V1020. However, in our listening tests we found the V1220 had impressive power in the deepest notes but lacked mid-bass punch and didn’t blend as smoothly as the V1020 did with the speakers we used.
In any case, there's enough overhead there to sculpt something reasonably nice sounding with a DSP and a little patience. I was also surprised to see the RSL 10" outperform the Dayton 15" by that big a margin.

Here are a few more comparisons for your enjoyment:



Spreadsheet data sources:
To get the numeric values from the Google Docs graph, I had to scrape some JSON data from the web page source, deserialize it using my socially-awkward super-geek scripting powers, then subtract 9 dB from all values to convert 1m peak to 2m RMS. Full credit should go to Brent Butterworth for his painstaking measurements, and to The Wirecutter for commissioning his efforts. I didn't bother adding most of the subs to my spreadsheet -- only those I found interesting. In case you want to add others from that Google Docs graph, here are the raw data represented as 2m RMS values (a rough sketch of the scrape-and-convert script follows the data below):

BIC V1020
80 Hz: 109.4
63 Hz: 107.1
50 Hz: 105.3
40 Hz: 101.8
31.5 Hz: 95
25 Hz: 81.3
20 Hz: 70.3

BIC V1220
80 Hz: 109.7
63 Hz: 108.7
50 Hz: 107.1
40 Hz: 100.2
31.5 Hz: 98.7
25 Hz: 92.5
20 Hz: 85.9

Dayton Audio SUB-1000L
80 Hz: 105.7
63 Hz: 102.9
50 Hz: 99.3
40 Hz: 95.9
31.5 Hz: 91.9
25 Hz: 87.8
20 Hz: 80.9

Dayton Audio SUB-1500
80 Hz: 109.1
63 Hz: 108.3
50 Hz: 106.4
40 Hz: 101.6
31.5 Hz: 97.4
25 Hz: 94.2
20 Hz: 85.9

Monoprice 9723
80 Hz: 112.6
63 Hz: 110.6
50 Hz: 106.6
40 Hz: 100.9
31.5 Hz: 95.3
25 Hz: 89.2
20 Hz: 81.3

Monoprice 14567
80 Hz: 108.8
63 Hz: 107.7
50 Hz: 101.8
40 Hz: 95.3
31.5 Hz: 88.8
25 Hz: 66.3
20 Hz: 48.3

Monoprice 605999
80 Hz: 112.2
63 Hz: 110.4
50 Hz: 106.9
40 Hz: 102
31.5 Hz: 94.2
25 Hz: 81.6
20 Hz: 70.3

Onkyo SKW204
80 Hz: 112.4
63 Hz: 110.6
50 Hz: 105
40 Hz: 99.7
31.5 Hz: 94.3
25 Hz: 81.1
20 Hz: 68.3

Pioneer SW-8MK2
80 Hz: 105.6
63 Hz: 104.4
50 Hz: 102.2
40 Hz: 97.5
31.5 Hz: 93.1
25 Hz: 77.5
20 Hz: 64.6

Polk PSW110
80 Hz: 109.5
63 Hz: 108.9
50 Hz: 105.3
40 Hz: 101.3
31.5 Hz: 95.2
25 Hz: 78.1
20 Hz: 68.6

Polk PSW111
80 Hz: 107
63 Hz: 106.1
50 Hz: 102
40 Hz: 97.5
31.5 Hz: 82.9
25 Hz: 67.1
20 Hz: 63.6

Yamaha YST-SW012
80 Hz: 107.2
63 Hz: 100.9
50 Hz: 96.5
40 Hz: 89.7
31.5 Hz: 82.5
25 Hz: 71
20 Hz: 53

Yamaha YST-SW215
80 Hz: 109.5
63 Hz: 104.5
50 Hz: 97.8
40 Hz: 90.7
31.5 Hz: 83.6
25 Hz: 76.7
20 Hz: 65.5
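
In case anyone wants to reproduce the scraping step, here's roughly what it looked like. This is a minimal sketch: the URL is a placeholder and the embedded JSON structure is assumed, since I'm not reproducing the actual page source here.

Code:
import json
import re
import urllib.request

# Placeholder URL -- not the real chart page.
CHART_URL = "https://docs.google.com/spreadsheets/d/EXAMPLE_ID"

html = urllib.request.urlopen(CHART_URL).read().decode("utf-8")

# Assumed pattern: the page embeds its chart data as a JSON object
# assigned to a JavaScript variable somewhere in the source.
match = re.search(r"var\s+chartData\s*=\s*(\{.*?\});", html, re.DOTALL)
if match is None:
    raise RuntimeError("chart JSON not found -- page layout differs from this sketch")
# Assumed shape: {"BIC V1020": {"80": <1m peak SPL>, ...}, ...}
data = json.loads(match.group(1))

# Convert 1m peak to 2m RMS: -6 dB for doubling the distance,
# -3 dB going from peak to RMS, i.e. subtract 9 dB total.
for sub, readings in data.items():
    for freq, spl_1m_peak in readings.items():
        print(f"{sub} @ {freq} Hz: {float(spl_1m_peak) - 9:.1f} dB (2m RMS)")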

Excel spreadsheet attached below. To add or remove subs from the comparison, right-click the graph and choose "Select Data Source", then check or uncheck entries as desired. If you don't have Excel, LibreOffice Calc will also open the file, but you'll have to re-make the graph.
 

shadyJ

Speaker of the House
Staff member
Hey Rojo, one thing to keep in mind is that while in theory CEA-2010 should lead to comparable results from different testers, in practice that just has not been the case, at least for any close comparison. I would say that if you want to do this type of comparison, you might want to keep different testers' results separate.
 
rojo

Audioholic Samurai
shadyJ said:
Hey Rojo, one thing to keep in mind is that while in theory CEA-2010 should lead to comparable results from different testers, in practice that just has not been the case, at least for any close comparison. I would say that if you want to do this type of comparison, you might want to keep different testers' results separate.
How much could the values differ? Even with different humidity, temperature, elevation, etc., I didn't think there'd be more than a dB or two of difference. Isn't that the point of CEA-2010, to provide a set of guidelines that standardize subwoofer measurements within a reasonable margin of error?
 
shadyJ

Speaker of the House
Staff member
rojo said:
How much could the values differ? Even with different humidity, temperature, elevation, etc., I didn't think there'd be more than a dB or two of difference. Isn't that the point of CEA-2010, to provide a set of guidelines that standardize subwoofer measurements within a reasonable margin of error?
Yes, that is the point of CEA-2010, but it isn't stringent enough. Different software captures the bursts in different ways (see the burst sketch below). Sound interfaces and microphones also have an effect, as does ambient noise. The orientation of the subwoofer and its distance from the nearest large structure can affect the results. Thermal compression also affects these tests: the warmer the voice coil, the less output you will get. If you want to see some differences, look at the measurements of the Hsu VTF-15H Mk1. There are four different tests of that sub: Paul Apollonio's, Josh Ricci's, Brent Butterworth's, and Hsu's own measurements. Or compare my results to Brent Butterworth's: we have both measured the Outlaw Ultra-X13 and SVS SB16. There are many more such examples. I was discussing CEA-2010 with the designer of the Ultra-X13, and we arranged for some CEA-2010 tests using different software. We have CEA-2010 results from Clio, REW, and Keele's software for Igor Pro: different results from each program, even though the testing was done on the same day with the same equipment, same placement, and same location.

Once you understand how the results can differ and the causes of the differences, you can get an idea of how these subwoofers perform even across different testers' data sets. I have seen other attempts at listing a bunch of CEA-2010 results on spreadsheets with the mistaken assumption that the results are all absolute performance metrics for these subwoofers. If you really want to understand how these subs compare, segregate your spreadsheets by tester. Yes, it's less comparable data, but at least it's not misleading.
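
For context on the burst-capture differences: the CEA-2010 test signal is a short shaped sine burst (commonly described as 6.5 cycles of a sine under a Hann window), and analyzers differ in how they window and read the peak of the captured burst. A minimal sketch of generating such a burst, assuming that common definition:

Code:
import numpy as np

def cea2010_burst(freq_hz, fs=48000):
    """Shaped sine burst: 6.5 cycles under a Hann window (assumed definition)."""
    n = int(round(6.5 * fs / freq_hz))   # samples spanning 6.5 cycles
    t = np.arange(n) / fs
    return np.hanning(n) * np.sin(2 * np.pi * freq_hz * t)

# A 31.5 Hz burst lasts about 0.21 s; the analyzer reports the peak SPL
# it captures, so windowing and capture choices shift the reported number.
burst = cea2010_burst(31.5)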
 
KEW

Audioholic Overlord
So why not publicize which software and gear produce the most favorable results?
I'm betting that would become the de facto standard: manufacturers will use it, reviewers don't want to piss off manufacturers so they will use it too, and at that point any other labs might as well follow suit to be consistent. Then we have a standard!
 
shadyJ

Speaker of the House
Staff member
KEW said:
So why not publicize which software and gear produce the most favorable results?
I'm betting that would become the de facto standard: manufacturers will use it, reviewers don't want to piss off manufacturers so they will use it too, and at that point any other labs might as well follow suit to be consistent. Then we have a standard!
There are a couple of problems with this. First of all, CEA-2010 is a measurement that relatively few people care about. Manufacturers don't really care, and most publications don't really care, except for a few that stress objective performance, such as Audioholics. Manufacturers are aware of the differences that can occur, but since the metric isn't thought to be super-important, there won't be any movement toward a more rigorous standard. One reason is that CEA-2010 is already a bit confusing to consumers; trying to explain it, and the reasons for the differences between testers, is bound to cause more confusion than it resolves.

As for favorable results, we should be more interested in accuracy than in simply bigger numbers. It's not just gear and software, either; it's also testing conditions. Here is what I will say: my measurements are pretty close to Ricci's, Hsu's, and SVS's. We are all using the same software, Don Keele's program written for Igor. I use REW too, but I don't publish those numbers in my reviews, just Keele's program's results, to keep my numbers as close to Ricci's as possible. Ricci is testing in a very good location, I am testing in a pretty good location, and Hsu is testing in a not-so-good location, but they are in Anaheim, and there probably isn't a really good spot anywhere near them. I am not sure of SVS's testing conditions or equipment, but I can say that my numbers have been closer to SVS's than anyone else's, sometimes the exact same numbers (they don't make their numbers public).

One more thing: just because our results might not agree doesn't mean someone is wrong (unless the numbers were deliberately fudged). A result is simply what that equipment measured under those conditions with that specific method. It would only be wrong if we couldn't repeat those results with that equipment under those conditions, and that probably isn't the case. So don't look at someone's tests and assume the results are incorrect just because they don't match what someone else measured.
 
rojo

Audioholic Samurai
shadyJ said:
Yes, that is the point of CEA-2010, but it isn't stringent enough. Different software captures the bursts in different ways. Sound interfaces and microphones also have an effect, as does ambient noise. The orientation of the subwoofer and its distance from the nearest large structure can affect the results. Thermal compression also affects these tests: the warmer the voice coil, the less output you will get. If you want to see some differences, look at the measurements of the Hsu VTF-15H Mk1. There are four different tests of that sub: Paul Apollonio's, Josh Ricci's, Brent Butterworth's, and Hsu's own measurements. Or compare my results to Brent Butterworth's: we have both measured the Outlaw Ultra-X13 and SVS SB16. There are many more such examples. I was discussing CEA-2010 with the designer of the Ultra-X13, and we arranged for some CEA-2010 tests using different software. We have CEA-2010 results from Clio, REW, and Keele's software for Igor Pro: different results from each program, even though the testing was done on the same day with the same equipment, same placement, and same location.

Once you understand how the results can differ and the causes of the differences, you can get an idea of how these subwoofers perform even across different testers' data sets. I have seen other attempts at listing a bunch of CEA-2010 results on spreadsheets with the mistaken assumption that the results are all absolute performance metrics for these subwoofers. If you really want to understand how these subs compare, segregate your spreadsheets by tester. Yes, it's less comparable data, but at least it's not misleading.
As you suggested, I compared your measurements of the Ultra-X13 with Butterworth's. Except at 16 Hz, all of your values are within 1.6 dB of each other. That's close enough for government work.


Code:
           Butterworth Larson
80 Hz      114.6       114.3
63 Hz      114.7       114.1
50 Hz      114.2       113.7
40 Hz      113.3       113.1
31.5 Hz    113         111.7
25 Hz      110.5       108.9
20 Hz      105.1       105.8
16 Hz      97.6        102.3
Your measurements of the SB16 versus Butterworth's do a better job of supporting your point, though, with a 3.8 dB difference at 31.5 Hz.


Code:
           Butterworth Larson
80 Hz      115.6       116.0
63 Hz      114.4       116.5
50 Hz      114.0       116.2
40 Hz      114.5*      115.2
31.5 Hz    109.3       113.1
25 Hz      104.0       106.4
20 Hz      99.1        100.1
16 Hz      94.5        94.7
* Note: Butterworth's value at 40 Hz here could be 3 dB off. His 1m peak and 2m RMS values differ by only 6 dB in his published table, whereas the 1m peak to 2m RMS conversion implies a 9 dB difference.
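
For anyone who wants to double-check the deltas, a quick script like this works (values transcribed from the Ultra-X13 table above; the same check on the SB16 table surfaces the 3.8 dB gap at 31.5 Hz):

Code:
# Per-frequency deltas between the two testers' Ultra-X13 numbers.
butterworth = {80: 114.6, 63: 114.7, 50: 114.2, 40: 113.3,
               31.5: 113.0, 25: 110.5, 20: 105.1, 16: 97.6}
larson      = {80: 114.3, 63: 114.1, 50: 113.7, 40: 113.1,
               31.5: 111.7, 25: 108.9, 20: 105.8, 16: 102.3}

for freq in butterworth:
    print(f"{freq} Hz: {abs(butterworth[freq] - larson[freq]):.1f} dB")
# Everything lands within 1.6 dB except 16 Hz, which differs by 4.7 dB.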

The VTF-15H Mk1 tests you cited also support your point, with reservations. Ricci and Hsu have remarkably similar results, with a maximum difference of only 1.1 dB at a single data point. Apollonio tracks the same curve, but several dB lower. Butterworth's curve appears to have been struggling against the still-developing CEA-2010 standard. But judging from all the explanations, revisions, and qualifications surrounding the data in these reviews, skepticism is justifiable. It seems the VTF-15H Mk1 discrepancy in this example is more an artifact of the standard's infancy.


Go home, subwoofer. You're drunk.
Code:
           Ricci    Butterworth    Apollonio    Hsu
80 Hz      114.6                                114
63 Hz      114.2    113.7          111.3        113.9
50 Hz      113.7    112.2          111          113.7
40 Hz      113.5    111            111          113.3
31.5 Hz    111.3    112.6          109          110.2
25 Hz      106.9    107.7          105.4        106
20 Hz      104.1    105.8          102.6        104.4
16 Hz      100.6                                100.3
In any case, I see your point: the data in my spreadsheet should be interpreted with reservations, and it's useful to group the measurements by tester.
 