Yes, that is the point of CEA-2010, but it isn't stringent enough. Different software captures the bursts in different ways. Sound interfaces and microphones also have an effect, as does ambient noise. The orientation of the subwoofer and its distance from the nearest large structure can affect the results. Thermal compression affects these tests too: the warmer the voice coil gets, the less output you will get. If you want to see some differences, look at the measurements of the Hsu VTF15h mk1. There are four different tests of that sub, by Paul Apollonio, Josh Ricci, Brent Butterworth, and Hsu themselves. Or compare my results to Brent Butterworth's: we have both measured the Outlaw Ultra-X13 and SVS SB16. There are many more such examples. I was discussing CEA-2010 with the designer of the Ultra-X13, and we arranged for some CEA-2010 tests using different software. We have CEA-2010 results from Clio, REW, and Keele's software for Igor Pro: different results from each program, even though the testing was done on the same day with the same equipment, same placement, and same location.
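For anyone curious what "the bursts" look like, here is a rough sketch of the kind of shaped tone burst CEA-2010 uses. I'm assuming a 6.5-cycle sine shaped by a Hann window, which is my understanding of the Keele-style signal; check the actual standard before relying on this.

```python
# Sketch of a CEA-2010-style shaped tone burst (assumption: a
# 6.5-cycle sine multiplied by a Hann window; verify against the
# published standard before using for real measurements).
import numpy as np

def shaped_burst(freq_hz, fs=48000, cycles=6.5):
    """Generate a Hann-windowed sine burst at freq_hz, sampled at fs."""
    n = int(round(cycles / freq_hz * fs))  # total samples in the burst
    t = np.arange(n) / fs                  # time axis in seconds
    window = np.hanning(n)                 # Hann shaping, zero at both ends
    return window * np.sin(2 * np.pi * freq_hz * t)

burst = shaped_burst(31.5)  # 31.5 Hz, one of the standard test bands
```

The windowing is part of why different capture software can disagree: each program has to detect the burst peak and apply its own distortion thresholds to a signal that is only a few cycles long.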
Once you understand how the results can differ and the causes of the differences, you can get an idea of how these subwoofers perform even across different testers' data sets. I have seen other attempts at listing a bunch of CEA-2010 results in spreadsheets with the mistaken assumption that the results are absolute performance metrics for these subwoofers. If you want to really understand how these subs compare, segregate your spreadsheets by tester. Yeah, it's less comparable data, but at least it's not misleading.
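If you keep your results in a spreadsheet, the per-tester segregation is easy to enforce in software. A minimal sketch, assuming a hypothetical table with columns "tester", "sub", and "spl_db" (the numbers below are made up purely for illustration, not real measurements):

```python
# Minimal sketch of per-tester comparison using pandas. Column names
# and all SPL values are hypothetical, for illustration only.
import pandas as pd

data = pd.DataFrame({
    "tester": ["Ricci", "Ricci", "Butterworth", "Butterworth"],
    "sub":    ["Ultra-X13", "SB16", "Ultra-X13", "SB16"],
    "spl_db": [120.1, 118.5, 118.8, 117.2],  # illustrative numbers only
})

# Rank each sub within a single tester's data set, never across testers.
data["rank_within_tester"] = (
    data.groupby("tester")["spl_db"].rank(ascending=False)
)
print(data)
```

The point of the `groupby("tester")` is exactly the segregation argued for above: a sub's rank is only ever computed against other subs measured by the same tester, so cross-tester offsets from software, gear, and placement never pollute the comparison.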