I will say Harman is probably one of the few companies that don't suffer from "familiarity bias" in their test results. Yes, I have read their test results, and I still feel their sample size (alternative brands/models) is too small and limited to come to some of the conclusions they have reached so definitively. Harman only directly compares its products to those it considers its mainstream direct competitors (e.g. B&W, Martin Logan, Klipsch). There are over 400 brands in the US market alone, some of which don't have the market share and thus fall off Harman's radar. There are a lot of good companies doing good work making expensive but excellent products. Thus I disagree with the notion that beyond a certain price point only cosmetic improvements are achieved.
The implication of Harman's work for the 'untested' brands is simply that, if compared fairly on the basis of sound alone, ones that 'measure well' in certain parameters will tend to be preferred over ones that don't, by listeners both 'trained' and 'untrained'.
Happily for loudspeaker manufacturers, 99.9% of listeners won't ever be able to fairly compare loudspeakers by listening alone.
Happily for them too, the relevant measurements are often not available to the consumer. So the consumer is left with the highly superstitious method of simply 'auditioning' loudspeakers in demo rooms or in their own rooms, where the listener could well conclude that a loudspeaker 'sounds bad' when the problem is the room and the positioning, not the product; or they might luck out and find a crummy speaker that happens to sound good in that crummy room.
I don't think we can define such an amount when our ability to make subjective comparisons and conclusions is still quite limited. Most listening tests are done at low power levels, where speakers generally behave well. What happens at higher levels in large rooms? Clean output and extension cost real money because they involve multi-driver arrays, more powerful motors, better high-temperature crossover parts, etc. $1500/pair is NOT the magic number for achieving this (not that Harman says it is, but some brands that preach DBT as religion do).
And I ask again: what loudspeaker brands preach 'DBT religion'? Harman is well known and matter-of-fact in its devotion to measurements and DBTs in its product development. I know Pioneer has published audio research that used scientific protocols. Other than that, I draw a blank. No doubt there are loudspeaker brands that tout bench (anechoic, quasi-anechoic) measurements, but that's a different thing. DBTs of various kinds (there is more than one kind of 'DBT') are common in preference research in product development outside of audio wherever sensory perception is involved (e.g., 'blind taste tests'). They can be abused, but I don't see much evidence so far that that's happening in the loudspeaker realm.
Upon closer inspection of some tests I've seen, they often seem more like a test of preference for loudspeakers with more bass extension than of overall fidelity. In a direct, instantaneous comparison, I've personally found listeners gravitate toward the speaker with the most boom and sizzle. This is why I maintain that long-term, frequent listening tests of products on an individual basis are necessary. If Speaker A has more bass than Speaker B, but Speaker B produces better mids/highs and can play at higher SPL more cleanly, then it may be a good idea to bass-manage both speakers to a level-matched sub and run separate listening tests to see how listener preference changes. Nobody does this to my knowledge.
Please point me to those tests you've seen.
IMO we are NOT at the stage (perhaps we never will be) where we can just look at a graph, flip a switch for a quick test, and determine a causal link between measured results and subjective preferences. Sorry, there are too many variables to control, some of which we still don't have a firm grasp of. Harman is probably closer to this than anyone, but it's still not an exact science IMO.
Please stop erecting straw men to knock down. No DBT user I know of -- either professional, like those at Harman, or amateur -- has been talking about 'exact science' (whatever that is... all the science I know of has degrees of confidence attached to its results).
The overall thrust of Toole's/Olive's/Harman's results is that measurements we can do right now *do* predict listener preference to a good degree of confidence -- *if* the listener is assessing only the sound. Of course a good degree of confidence is not an exact degree: it's not going to tell you correctly, every time, what a particular listener is going to prefer, even in a blind comparison. The work reveals preference trends and relates them to measurements. Don't damn them for being something they never intended to be: perfect predictors of subjective preference.
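For what it's worth, the kind of prediction I mean has actually been published: Olive's multiple-regression preference model, fitted to the blind-test data. Below is a rough sketch of it in Python, written from memory -- the coefficients and the precise definitions of the input metrics (NBD_ON, NBD_PIR, LFX, SM_PIR) should be checked against the AES paper rather than taken from me, and the example numbers are purely illustrative.

```python
# Rough sketch of Olive's regression model for predicted loudspeaker preference,
# reconstructed from memory -- verify coefficients against the published AES paper.
# Inputs, all derived from anechoic ("spinorama") measurements:
#   nbd_on  : narrow-band deviation of the on-axis response (lower = flatter)
#   nbd_pir : narrow-band deviation of the predicted in-room response
#   lfx     : low-frequency extension term (roughly log10 of the -6 dB point in Hz;
#             lower = deeper bass, e.g. 1.6 ~ 40 Hz, 1.9 ~ 80 Hz)
#   sm_pir  : smoothness of the predicted in-room response (higher = smoother)

def predicted_preference(nbd_on: float, nbd_pir: float, lfx: float, sm_pir: float) -> float:
    """Predicted preference rating on roughly a 0-10 scale."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Illustrative comparison: a flat, smooth speaker with deep extension vs. a more
# ragged one with an earlier bass roll-off (numbers invented for the example).
print(predicted_preference(0.3, 0.25, 1.6, 0.9))  # higher predicted rating
print(predicted_preference(0.6, 0.50, 1.9, 0.7))  # lower predicted rating
```

If I'm remembering the paper right, that small handful of measurable quantities accounted for the large majority of the variance in the blind-test preference ratings -- which is exactly the 'good degree of confidence, not an exact science' point.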
If your overall message is "don't believe everything you read in audio marketing literature" -- well, duh. I think there are much bigger fish to fry in other parts of audiophilia than the tiny school of manufacturers that cite DBT-based results. I can't fathom why they've stuck in your craw.
Finally, why is Philip Bamberg given privileged commentary in the article, versus any number of other people who might have expertise in the areas you are discussing in it?