The Insanity of Marketing Disguised as Science in Loudspeakers

admin

Audioholics Robot
Staff member
This article is an opinion piece on why you simply cannot declare Speaker XXX better than Speaker YYY based on a few measurement graphs, or on a manufacturer's claim that its speakers are inherently the best because it uses anechoic chambers and DBT protocols during the design and testing phases. We explore many of the misconceptions consumers fall victim to when viewing loudspeaker measurements or buying too heavily into the DBT mantra. In a small market catering to audio enthusiasts, one that seems to be continually shrinking, it's not unreasonable that manufacturers often dress up marketing as science. It is important to recognize this, and to note that how a loudspeaker plays into a room, and how we ultimately perceive that experience, is a far more complex topic than we can fully understand and neatly frame with a few measurements and listening tests (sighted or blind).


Discuss "The Insanity of Marketing Disguised as Science in Loudspeakers" here. Read the article.
 
Last edited by a moderator:
randyb

Full Audioholic
This article is an opinion piece on why you simply cannot declare Speaker XXX better than Speaker YYY based on a few measurement graphs, or on a manufacturer's claim that its speakers are inherently the best because it uses anechoic chambers and DBT protocols during the design and testing phases. We explore many of the misconceptions consumers fall victim to when viewing loudspeaker measurements or buying too heavily into the DBT mantra. In a small market catering to audio enthusiasts, one that seems to be continually shrinking, it's not unreasonable that manufacturers often dress up marketing as science. It is important to recognize this, and to note that how a loudspeaker plays into a room, and how we ultimately perceive that experience, is a far more complex topic than we can fully understand and neatly frame with a few measurements and listening tests (sighted or blind).


Discuss "The Insanity of Marketing Disguised as Science in Loudspeakers" here. Read the article.
I am going to guess Harman as the company that you contacted about the test. I have a lot of respect for Sean Olive and Floyd Toole. I also think that the reasons given probably came from the legal and marketing people. I am just guessing, of course. By the way, Scott Wilkinson had a good podcast with Sean, and it was funny that marketing people (Harman's own) were some of the worst listeners when it came to trained vs. untrained. Sean also talked about how comparing speakers blind without having them in the same position ruined any correlations that could be drawn. In other words, positioning matters when comparing speakers. My own experience suggests that quick switching is also needed to draw any reliable conclusions.
 
krabapple

Banned
Some of this is quite silly. For one thing, DBT isn't just a medical research method. For another, DBT is just as necessary in tests of preference as in tests of subtle difference, as shown, for example, by research on the relation of perceived cost to wine preference.

Gene seems incredulous at the idea that someone might prefer an objectively worse-measuring loudspeaker to a better one thanks to appearance (or reputation, or price), but that's just what Olive et al. showed in their series of JAES papers years ago. And no, they didn't have to remove the crossovers to make their point. That's a ridiculous example and a ridiculous argument. Their point wasn't that one confounder will always be strongest and always determine the choice under any circumstance. Their point was that these confounders have real effects on consumer choice.
Consumers may believe they are *only* picking on an audible basis, but the research shows that's often not the case.
 
lsiberian

Audioholic Overlord
Tread carefully Gene. You start talking sense and the crazies will come out of their crazy rooms.

Looks matter. Good finishes inspire confidence and improve the listening experience. Why shouldn't we allow sight to influence our perception? Are we supposed to be robots?
 
Last edited by a moderator:
gene

Audioholics Master Chief
Administrator
Some of this is quite silly. For one thing, DBT isn't just a medical research method. For another, DBT is just as necessary in tests of preference as in tests of subtle difference, as shown, for example, by research on the relation of perceived cost to wine preference.

Gene seems incredulous at the idea that someone might prefer an objectively worse-measuring loudspeaker to a better one thanks to appearance (or reputation, or price), but that's just what Olive et al. showed in their series of JAES papers years ago. And no, they didn't have to remove the crossovers to make their point. That's a ridiculous example and a ridiculous argument. Their point wasn't that one confounder will always be strongest and always determine the choice under any circumstance. Their point was that these confounders have real effects on consumer choice.
Consumers may believe they are *only* picking on an audible basis, but the research shows that's often not the case.

Really? Show me the "research," because the "research" is often discussed, though it's never fully disclosed. I love how manufacturers claim "research, research," but they never actually SHOW the results, never show the test setups, never detail the test biases, never reveal the products under test or whether they were set up properly per manufacturer guidelines, etc.

Blind tests are great. We do them. But let's not fool ourselves into believing:
1. companies are adhering to a strict DBT protocol - most, if NOT all, aren't
2. blind tests are infallible
3. blind tests can show clear preference for one speaker over another with > 95% confidence - most test sample sizes are too small to reach statistical significance
4. a single trial can definitively and accurately allow a listener to decide on their preference
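
Point 3 is easy to sanity-check with an exact binomial calculation (a hypothetical sketch, not from the article; the function name is mine). Under the null hypothesis of no real preference, each listener's vote is a fair coin flip, and even a lopsided result from a small panel often fails to reach 95% confidence:

```python
from math import comb

def pref_p_value(wins: int, n: int) -> float:
    """One-sided p-value: probability of at least `wins` votes for one
    speaker out of `n` listeners if there were truly no preference (p = 0.5)."""
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2**n

# 8 of 10 listeners picking speaker A looks decisive, but p ~ 0.055:
# it just misses the conventional 95% confidence threshold.
print(pref_p_value(8, 10))

# The same 80/20 split with a 25-listener panel is clearly significant (p ~ 0.002).
print(pref_p_value(20, 25))
```

The same preference ratio can be statistically meaningless or highly significant depending purely on panel size, which is exactly the sample-size concern raised in point 3.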

This is why I enjoy engaging in both sighted and blind tests, and why I spend a couple of days or weeks conducting individual product testing as well as direct comparisons. That gives me plenty of time to assess the true performance of each product, and it ensures I don't suffer listening fatigue from a more colored speaker that may initially sound "better" simply because it stands out more.

I highly recommend reading the following two articles we wrote that discuss the flaws in loudspeaker testing, both sighted and blind:

Overview of Audio Testing Methodologies — Reviews and News from Audioholics

Revealing Flaws in the Loudspeaker Demo & Double Blind Test — Reviews and News from Audioholics

As for me being incredulous that one would prefer a "bad"-measuring speaker to a "good"-measuring one: I restate that most measurements aren't revealing enough to truly determine which speaker measures "good" and which measures "bad." Measurements can easily be done incorrectly, or manipulated to look good or bad, as you can see in the article I wrote:

Audio Measurements - The Useful vs the Bogus



Looks matter. Good finishes inspire confidence and improve the listening experience. Why shouldn't we allow sight to influence our perception? Are we supposed to be robots?
Yep, and why don't the companies/people that claim "looks bias the results" simply make their speakers more attractive? Do they lack that much confidence in how their products appear? Looks matter in any consumer luxury good, so perhaps offer a line of speakers with upgraded cosmetics at a price premium.

Years ago I compared two pairs of speakers in a sighted test. One pair was from a British company with a much more prestigious name, and it looked 100 times better than the plain American speaker, which had a rather generic name (certainly not regarded as high end) and was also cheaper than the prestigious British pair. I figured for sure the British speaker would win the comparison. To my surprise, the ugly American speaker won when I compared them myself, and the roughly 8 different people I had run the same test all came to the same conclusion. The pretty, more prestigious, pricier speaker did NOT win! Some of the listeners weren't familiar with the brands, others were. It didn't really matter: they all preferred the cheap, generic-looking American speaker. I've seen this happen many times since then, so let's please drop the bullsh1t that a prettier, more prestigious speaker will significantly bias the result in its favor. I am sure there is some truth to "looks matter," but I think it's far less relevant to the listening experience each person will have, especially if the listener is far enough away from the speakers to not even be able to tell which pair is playing. I totally agree with Sean Olive that positional bias is a huge factor too. This is yet another reason why extended and multiple listening sessions with each speaker are so important.
 
Last edited:
jliedeka

Audioholic General
Floyd Toole described Harman's DBT setup in his book. They had a mechanical system to move the speakers into identical positions, all behind an acoustically transparent curtain. A system like that has a better chance of producing a correct evaluation. Extended evaluation is all well and good, but our audio memories are too short to be sure about differences.

While I agree that FR graphs can be misleading unless you know the measurement protocols, impedance and phase graphs can be pretty revealing and are hard to screw up.

Jim
 
krabapple

Banned
GDS,
Your 4-point list of things we have supposedly fooled ourselves into believing is all straw men. I frankly can't recall anyone ever claiming 1, 2, or 4, and #3 requires careful review of exactly what is being claimed, in what research. (Your anecdote about the 'pretty' speaker losing a listening test is kind of hilarious in regard to #4, since you're expecting me to accept this single, *sighted* test as significant proof of something.)

I suggest you familiarize yourself with at least the articles I list at the end of this post, before you write on what we know about listening tests and factors influencing preference. They are classics of their type, and all in my library -- are they in yours? They should be (yes, even where the JAES article covers the same ground as the convention paper -- sometimes one includes details the other lacks).

And again, quality perception confounders are not confined to audio. They're a common phenomenon. Appearance, price, brand (to name three factors that are practically universal to merchandise) all influence consumer choice in ways they aren't necessarily conscious of, as advertisers and manufacturers well know. It's not bullshit. (You'd better believe they pay for studies of same, and they usually *don't* publish them.)

As for the articles you've linked to: when they make dubious inferences like "When manufacturers say that their speakers sound equally good to other (usually much more expensive) speakers (the 'tie' situation mentioned above), they are basically saying that they've proved the null," and then provide a list of supposedly ignored factors that Harman, for example, actually HAS considered, it's hard to take such articles seriously. Or when they claim or imply, as you have again in this article, that blind tests have no use in preference evaluation, which of course they do (ABC/hr has been used extensively in lossy codec evaluation, for example). These claims reveal that the 'truth tellers' haven't actually familiarized themselves with the literature.

Finally, you make a huge deal out of researchers not citing the brands they are comparing (which isn't really unusual for journal-published research), even while making insinuations about unspecified manufacturers biasing their tests. Why don't you just come out and name them?




//JAES papers related to loudspeaker preference

Listening Tests-Turning Opinion into Fact
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ont. K1A OR6, Canada
JAES Volume 30 Issue 6 pp. 431-445; June 1982

Subjective Measurements of Loudspeaker Sound Quality and Listener Performance
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ontario K1A OR6, Canada
JAES Volume 33 Issue 1/2 pp. 2-32; February 1985

Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 1
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ontario K1A 0R6, Canada
JAES Volume 34 Issue 4 pp. 227-235; April 1986

Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ont. K1A 0R6, Canada
JAES Volume 34 Issue 5 pp. 323-348; May 1986

Differences in Performance and Preference of Trained versus Untrained Listeners in Loudspeaker Tests: A Case Study
Author: Olive, Sean E.
Affiliation: Research & Development Group, Harman International Industries, Inc., Northridge, CA
JAES Volume 51 Issue 9 pp. 806-825; September 2003


// Convention papers of same

Differences in Performance and Preference of Trained versus Untrained Listeners In Loudspeaker Tests: A Case Study
Author: Olive, Sean E.
Affiliation: Research & Development Group, Harman International Industries, Inc., Northridge, CA
AES Convention:114 (March 2003) Paper Number:5728


A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results
Author: Olive, Sean E.
Affiliation: Harman International Industries, Inc., Northridge, CA
AES Convention:116 (May 2004) Paper Number:6113


A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model
Author: Olive, Sean E.
Affiliation: Harman International Industries, Inc., Northridge, CA
AES Convention:117 (October 2004) Paper Number:6190
 
Last edited:
gene

Audioholics Master Chief
Administrator
Floyd Toole described Harman's DBT setup in his book. They had a mechanical system to move the speakers into identical positions, all behind an acoustically transparent curtain. A system like that has a better chance of producing a correct evaluation. Extended evaluation is all well and good, but our audio memories are too short to be sure about differences.

While I agree that FR graphs can be misleading unless you know the measurement protocols, impedance and phase graphs can be pretty revealing and are hard to screw up.

Jim
A speaker that has an obvious flaw, like the tweeter level being set too high, is easy to remember, so don't discount our auditory memory for obvious sonic-signature differences. I can vividly describe how a gourmet steak from my favorite restaurant tastes compared to a Denny's steak, and when I taste that gourmet steak again at a later date, my senses aren't shocked or surprised because they anticipate the stimuli I remember from past experiences. If the differences between speakers are subtle, then I agree that controlled blind tests like the ones Harman does produce more consistent results, because they help eliminate the bias of perception and, in Harman's case, also remove positional bias.

Long-term testing is used to determine how you feel about a speaker after experiencing it over a length of time with your own program material in your own listening space. I still tend to place more weight on this than on short-term instantaneous comparison testing. How many of you have test-driven a car, loved it at the time, only to be disappointed a few months later? Perhaps that happened because you noticed it's a bit bouncier, or not as quick to accelerate under certain driving conditions. I've experienced buyer's remorse many times when purchasing cars.

Harman probably has the best testing facility in the country for comparative blind testing. However, by rotating speakers on a platform, instantaneous switching between speakers isn't possible, so the auditory memory that helps detect subtle differences will be impaired. The question is which bias is worse - reduced auditory memory or positional bias? I'd be curious to hear Sean's opinion on that.

Personally, I like to engage in both long-term individual testing and instantaneous switch testing of speakers to get a good baseline of performance.

Impedance graphs can certainly reveal design flaws and help explain why a speaker can sound different when being driven by different amplifiers, but they don't directly dictate perceived sound quality.
 
gene

Audioholics Master Chief
Administrator
GDS,
Your 4-point list of things we have supposedly fooled ourselves into believing is all straw men. I frankly can't recall anyone ever claiming 1, 2, or 4, and #3 requires careful review of exactly what is being claimed, in what research. (Your anecdote about the 'pretty' speaker losing a listening test is kind of hilarious in regard to #4, since you're expecting me to accept this single, *sighted* test as significant proof of something.)

I suggest you familiarize yourself with at least the articles I list at the end of this post, before you write on what we know about listening tests and factors influencing preference. They are classics of their type, and all in my library -- are they in yours? They should be (yes, even where the JAES article covers the same ground as the convention paper -- sometimes one includes details the other lacks).

And again, quality perception confounders are not confined to audio. They're a common phenomenon. Appearance, price, brand (to name three factors that are practically universal to merchandise) all influence consumer choice in ways they aren't necessarily conscious of, as advertisers and manufacturers well know. (You'd better believe they pay for studies of same, and they usually *don't* publish them.)

As for the articles you've linked to: when they make dubious inferences like "When manufacturers say that their speakers sound equally good to other (usually much more expensive) speakers (the 'tie' situation mentioned above), they are basically saying that they've proved the null," and then provide a list of supposedly ignored factors that Harman, for example, actually HAS considered, it's hard to take such articles seriously. Or when they claim or imply, as you have again in this article, that blind tests have no use in preference evaluation, which of course they do (ABC/hr has been used extensively in lossy codec evaluation, for example). These claims reveal that the 'truth tellers' haven't actually familiarized themselves with the literature.

Finally, you make a huge deal out of researchers not citing the brands they are comparing (which isn't really unusual for journal-published research), even while making insinuations about unspecified manufacturers biasing their tests. Why don't you just come out and name them?




//JAES papers related to loudspeaker preference

Listening Tests-Turning Opinion into Fact
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ont. K1A OR6, Canada
JAES Volume 30 Issue 6 pp. 431-445; June 1982

Subjective Measurements of Loudspeaker Sound Quality and Listener Performance
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ontario K1A OR6, Canada
JAES Volume 33 Issue 1/2 pp. 2-32; February 1985

Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 1
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ontario K1A 0R6, Canada
JAES Volume 34 Issue 4 pp. 227-235; April 1986

Loudspeaker Measurements and Their Relationship to Listener Preferences: Part 2
Author: Toole, Floyd E.
Affiliation: National Research Council, Ottawa, Ont. K1A 0R6, Canada
JAES Volume 34 Issue 5 pp. 323-348; May 1986

Differences in Performance and Preference of Trained versus Untrained Listeners in Loudspeaker Tests: A Case Study
Author: Olive, Sean E.
Affiliation: Research & Development Group, Harman International Industries, Inc., Northridge, CA
JAES Volume 51 Issue 9 pp. 806-825; September 2003


// Convention papers of same

Differences in Performance and Preference of Trained versus Untrained Listeners In Loudspeaker Tests: A Case Study
Author: Olive, Sean E.
Affiliation: Research & Development Group, Harman International Industries, Inc., Northridge, CA
AES Convention:114 (March 2003) Paper Number:5728


A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results
Author: Olive, Sean E.
Affiliation: Harman International Industries, Inc., Northridge, CA
AES Convention:116 (May 2004) Paper Number:6113


A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model
Author: Olive, Sean E.
Affiliation: Harman International Industries, Inc., Northridge, CA
AES Convention:117 (October 2004) Paper Number:6190
Thanks for the references. I've read some of them, and I still argue the following points:

  • you assume the listener is aware of product price or brand-name prestige. I've had many listeners in sighted and blind tests at my place who were clueless about such things, so that bias is very dependent on listener knowledge
  • none of these papers addresses familiarity bias, which is often overlooked by companies running tests with their own panels of listeners
  • the companies that claim their speakers are "similarly good" to the most expensive and prestigious brands NEVER allow 3rd-party verification of their claims. They never indicate what they have tested, so that is a highly suspicious claim. And suppose for a moment it were true: that would mean the company could NEVER produce a better speaker, so why ever change or improve the design?
  • how can I know which manufacturers are biasing their tests when they don't allow independent verification of their results?
  • in my experience, the companies that make such claims usually don't want 3rd-party verification or shootouts done
  • are we really expected to believe companies go out and buy $200k/pair speakers from several manufacturers to compare against their $1500/pair model in order to declare them "similarly good"? I can tell you firsthand that even a company as vast as Harman doesn't have the budget for this. They typically purchase a few popular brands at price points similar to their own products to run comparative tests.

Please stop putting words in my mouth. I never said blind tests have no use in preference evaluations. I simply question the validity of the tests and claims some manufacturers engage in, and how much weight they place on perceptual bias over familiarity bias and other forms of bias they often don't disclose in their tests.

This industry lacks standards for measuring loudspeakers, especially in terms of audible distortion and how to measure it. I suggest reading the AES paper "Measurements and Perception of Nonlinear Distortion - Comparing Numbers and Sound Quality" by Alex Voishvillo, where this is discussed in great detail.

I am fully aware of the work Harman and others are doing to come up with a measurement standard, and believe me, I am all for it.

We've got subwoofers nailed down pretty well, but loudspeakers are a much more complex animal. On that topic, isn't it funny that folks rarely insist we test subwoofers blind? :rolleyes: I think some of this has to do with the fact that we are more successful at tying measurable results to audible preferences in subwoofers than we are with wide-bandwidth loudspeakers.
 
Last edited:
MinusTheBear

Audioholic Ninja
Good article. It brings up a point that is never raised about controlled listening tests: disclosing bias, especially familiarity bias. That is one of the most important biases to control for on the listening panel. Anytime you have employees who engineered the product, or who use the company's products in their homes, drawing a paycheck while conducting "double-blind" tests against the competition, that is a HUGE bias and really counterproductive to the "science" behind such controlled listening tests.
 
Irvrobinson

Audioholic Spartan
Tongue in cheek, obviously, but I think there should be a rule against trusting audio DBTs until you've participated in one and judged the relative value of the choices that made up your contribution to the conclusion.

Unless the differences are overt, the process is tedious and involves a lot of guessing.
 
TICA

Audioholics Accounts Manager
Testing

Most of these arguments about DBT forget some of its important rules: true randomization, impartiality on both sides, a high sample size (at least 100, preferably in the thousands), unbiased controls, and most important of all, the ability to reproduce the results independently, many times over, using the same data.

If you can accomplish most of this without biasing the data or misinterpreting the results, then you're on the right path. Even after all of this unbiased information is obtained, you CANNOT ever assume that you arrived at the correct conclusion and that CAUSE and EFFECT are related to the observations.

According to the American Medical Association, all such research and theories need to be peer reviewed. In their words, peer review is "a process of self-regulation by a profession or a process of evaluation involving qualified individuals within the relevant field. Peer review methods are employed to maintain standards, improve performance and provide credibility. In academia peer review is often used to determine an academic paper's suitability for publication." It should also be noted that research, especially medical research, is highly regulated. Aside from CEDIA, CEA, and the Electrical Engineering Association, not many regulatory standards exist for conducting these "trials." Showing proof of adherence to such standards would make the results far more credible and far easier to evaluate.
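
The sample-size point above can be made concrete with a quick power calculation (a sketch under my own illustrative assumptions: the function names and the 60/40 preference figure are not from this thread). Even a real but modest 60% preference rate needs on the order of 150 trials before an exact one-sided binomial test will detect it reliably:

```python
from math import comb

def tail_prob(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def trials_needed(true_pref: float = 0.6, alpha: float = 0.05, power: float = 0.8) -> int:
    """Smallest trial count at which an exact one-sided binomial test detects
    a true preference rate `true_pref` with the requested statistical power."""
    n = 1
    while True:
        # Smallest winning count that would be significant at level alpha;
        # k_crit == n + 1 means no outcome is significant for this n.
        k_crit = next(k for k in range(n + 2) if tail_prob(n, k, 0.5) <= alpha)
        if k_crit <= n and tail_prob(n, k_crit, true_pref) >= power:
            return n
        n += 1

print(trials_needed())  # on the order of 150 trials for a 60/40 preference
```

Few listening panels run anywhere near that many controlled trials, which is why claims of statistically demonstrated preference from a handful of comparisons deserve skepticism.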
 
randyb

Full Audioholic
Really? Show me the "research," because the "research" is often discussed, though it's never fully disclosed. I love how manufacturers claim "research, research," but they never actually SHOW the results, never show the test setups, never detail the test biases, never reveal the products under test or whether they were set up properly per manufacturer guidelines, etc.

Blind tests are great. We do them. But let's not fool ourselves into believing:
1. companies are adhering to a strict DBT protocol - most, if NOT all, aren't
2. blind tests are infallible
3. blind tests can show clear preference for one speaker over another with > 95% confidence - most test sample sizes are too small to reach statistical significance
4. a single trial can definitively and accurately allow a listener to decide on their preference

This is why I enjoy engaging in both sighted and blind tests, and why I spend a couple of days or weeks conducting individual product testing as well as direct comparisons. That gives me plenty of time to assess the true performance of each product, and it ensures I don't suffer listening fatigue from a more colored speaker that may initially sound "better" simply because it stands out more.

I highly recommend reading the following two articles we wrote that discuss the flaws in loudspeaker testing, both sighted and blind:

Overview of Audio Testing Methodologies — Reviews and News from Audioholics

Revealing Flaws in the Loudspeaker Demo & Double Blind Test — Reviews and News from Audioholics

As for me being incredulous that one would prefer a "bad"-measuring speaker to a "good"-measuring one: I restate that most measurements aren't revealing enough to truly determine which speaker measures "good" and which measures "bad." Measurements can easily be done incorrectly, or manipulated to look good or bad, as you can see in the article I wrote:

Audio Measurements - The Useful vs the Bogus





Yep, and why don't the companies/people that claim "looks bias the results" simply make their speakers more attractive? Do they lack that much confidence in how their products appear? Looks matter in any consumer luxury good, so perhaps offer a line of speakers with upgraded cosmetics at a price premium.

Years ago I compared two pairs of speakers in a sighted test. One pair was from a British company with a much more prestigious name, and it looked 100 times better than the plain American speaker, which had a rather generic name (certainly not regarded as high end) and was also cheaper than the prestigious British pair. I figured for sure the British speaker would win the comparison. To my surprise, the ugly American speaker won when I compared them myself, and the roughly 8 different people I had run the same test all came to the same conclusion. The pretty, more prestigious, pricier speaker did NOT win! Some of the listeners weren't familiar with the brands, others were. It didn't really matter: they all preferred the cheap, generic-looking American speaker. I've seen this happen many times since then, so let's please drop the bullsh1t that a prettier, more prestigious speaker will significantly bias the result in its favor. I am sure there is some truth to "looks matter," but I think it's far less relevant to the listening experience each person will have, especially if the listener is far enough away from the speakers to not even be able to tell which pair is playing. I totally agree with Sean Olive that positional bias is a huge factor too. This is yet another reason why extended and multiple listening sessions with each speaker are so important.
Gene,

Very interesting thread. Here is the deal: I would only really trust Don Keele and his measurements. I have corresponded with lots of people in the industry, and he stands out as the most honest and the most technically proficient in speaker measurements. Just my opinion, from exchanges with people in the industry. As Atkinson said (and I hate to quote him), "Don wrote the book" for us old guys. The point, I guess, is that I am a skeptic of you and the rest of the industry. You sell speakers, and that is a bias I cannot just ignore. When I'm told "you can't trust measurements" (unless they are mine) and "you can't trust blind or DBT tests" (by a person who sells speakers), what am I supposed to think? Or should we air the dirty laundry and see who is the most biased?

In my industry, you cannot even have the APPEARANCE of a conflict of interest. You can say you mentioned no brands, but that doesn't really work for me.

P.S. Please don't tell me that "Don sells speakers too," because his array is as much a concept as it is an up-and-running commercial success.


P.P.S. Your article was fine as an OPINION, but your replies to krabapple, who actually IS a scientist, are wanting.
 
Last edited:
3db

Audioholic Slumlord
What I see in the audio industry as a whole is a lack of standardization in testing methodology. I would like to think that some of the more serious manufacturers adhere to sound (excuse the pun) testing principles, processes, and procedures that are repeatable.

The problem for the objectivists has been the crap rammed down our throats by the subjectivists with their golden ears and megadollar wallets. We grow tired of unsubstantiated claims by the "audio bunny" reviewers whose subjective fluff is taken as gospel. Go to the Hoffman forum or the Audio Asylum forum, where people rave about how they can detect a .01 dB rise in a single frequency while listening to an orchestra play. :rolleyes: Throw in the cables-and-interconnects debate, album demagnetizing opening the soundstage tenfold, better power cables, and audio fuses/receptacles... pure unadulterated crap in my books.

It's a well-written article, Gene, and I understand what you are saying: take the measurements with a grain of salt when comparing products, and don't forget to listen to them. The products could be better than what was spec'd by the manufacturer or the audio bunny reviewers.
The problem is, who does one believe? :confused:
 
I think where communication breaks down is in the assumption process. What is being written and communicated has several layers underneath it that aren't understood by the party being communicated to. For example:

1) Gene is talking primarily about speakers that have significant differences - significant enough that the audio quality will override the aesthetics for (and this is important) "people who care about audio quality more".

2) A "true" (whatever that means) DBT precludes the involved parties even seeing the speakers, so there is no aesthetic bias

3) "Different" does not equal "better" or "worse" which complicates things to the untrained ear

4) Ear memory is great for significant differences, and is not implied to be true for subtle differences (thus the Denny's steak example)

and 5) The principle of "You should understand what I mean" and "I will form my opinions based on what you actually wrote" seem to be playing a major part in the discussion breakdown here... :)
 
AcuDefTechGuy

Audioholic Jedi
Why shouldn't we allow sight to influence our perception? Are we supposed to be robots?
LOL.:D

Love that statement.:D

The "whole" experience, right?

The whole package.

Emotion. Soul. Senses. Very essential to our very human existence.

I ain't no bad robot.:eek:

I see no harm with all the DBTs, measurements, and opinions.

This is a hobby.

It is not medical. Not a matter of life and death.

Bottom line is how the speakers sound to me in my own room over time.
 
Last edited:
jinjuku

Moderator
Tread carefully Gene. You start talking sense and the crazies will come out of their crazy rooms.

Looks matter. Good finishes inspire confidence and improve the listening experience. Why shouldn't we allow sight to influence our perception? Are we supposed to be robots?
We aren't talking primarily about looks, though. FACT: You don't need to see a speaker to judge its performance, any more than you need to know its price or the cabinet materials it is made of. This assumes said speaker is set up properly; different speakers will require different placement and even have room layout and dimension preferences.

I don't see a reason why an SBT isn't rigorous enough. What does it matter if the person switching speakers knows which is which? They aren't the one under evaluation.
 
cpp

Audioholic Ninja
Marketing is saying the right things to the right person. You might think of marketing this way. If business is all about people and money and the art of persuading one to part from the other, then marketing is all about finding the right people to persuade. People don't just "buy" a product. They "buy" the concept of what that product will do for them.

One can market a plain donut as something "over the top," a "must have," the "next best thing," but in reality it's still a plain donut. When it comes to audio equipment, marketers and those that support these NEW audio devices embellish their impressions with selected tests made in their "room," using supporting equipment and special secret room treatments that normal users could never afford or be allowed to duplicate. Now that plain donut comes with icing and takes on new tastes, so people try it, or buy it.

Not until the buyer gets that NEW piece of audio equipment into their room, using their supporting equipment and of course their own ears and their own music, will they actually know if it works in their room, regardless of what was read in an article. People need to understand that articles are starting points for your research before you purchase; they are not gospel to all who read them.

When you read an article, it's actually marketing in support of equipment that someone else likes, or thinks you should like, or even dislike, depending on what is written.

Mark Twain wrote, "You cannot depend on your eyes when your imagination is out of focus." I would add: "You cannot depend on your eyes OR EARS when your imagination is out of focus."
 
jinjuku

Moderator
This industry lacks standards for measuring loudspeakers, especially in terms of audible distortion and how to measure it. I suggest you read the following AES paper: Measurements and Perception of Nonlinear Distortion – Comparing Numbers and Sound Quality by Alex Voishvillo where this is discussed in great detail.
What is always needed, and I believe provided here, is a layman's method for the new and seasoned person alike to successfully evaluate speakers.

One thing I suggest, and see others suggest: evaluate in your home. It's an imperfect industry with no real agreed-upon method. Sometimes you just have to do the best you can as the consumer.

If my neighbor wanted to compare DACs, I would sure help him do it blind and take reasonable measures to make sure it's apples to apples.

A difference that takes 3 months to figure out isn't much of a difference at all, IMO. Nothing I would willingly spend an inordinate amount of money to achieve.
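As a concrete example of the nonlinear-distortion measurement the quoted AES paper discusses, here is a minimal sketch of estimating a THD percentage from a captured test tone. This is illustrative only: the `thd_percent` helper and its parameters are my own invention, not from the paper or any standard, and it assumes a clean, steady single-tone capture.

```python
import numpy as np

def thd_percent(signal, fs, fundamental_hz, n_harmonics=5):
    """Estimate total harmonic distortion of a steady tone via FFT magnitudes.

    A Hann window reduces spectral leakage; each harmonic's magnitude is
    taken as the peak in a small neighborhood of its expected FFT bin.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))

    def bin_mag(freq):
        k = int(round(freq * n / fs))  # expected bin index for this frequency
        lo, hi = max(k - 2, 0), min(k + 3, len(spectrum))
        return spectrum[lo:hi].max()

    fund = bin_mag(fundamental_hz)
    harmonics = [bin_mag(fundamental_hz * h) for h in range(2, n_harmonics + 2)]
    return 100.0 * np.sqrt(sum(m * m for m in harmonics)) / fund

# Synthetic check: a 1 kHz tone with a 1% second harmonic should read ~1% THD.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(round(thd_percent(tone, fs, 1000), 2))  # prints ~1.0
```

Note how much a single percentage hides: which harmonics dominate and how they are weighted. That loss of information is one reason the thread's complaint about non-standardized distortion specs has teeth.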
 
AcuDefTechGuy

Audioholic Jedi
Tongue in cheek, obviously, but I think there should be a rule against trusting audio DBTs until you've participated in one and judged the relative value of the choices that made up your contribution to the conclusion.

Unless the differences are overt the process is tedious and a lot of guessing is involved.
My Salon2s sound freaking awesome, though. How about yours?:D
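The "a lot of guessing is involved" complaint above has a concrete statistical face: in an ABX-style blind test, a listener flipping a coin still gets plenty of trials right by accident, so a run needs enough trials before a score means anything. A stdlib-only sketch (the `abx_p_value` helper is my own illustration, not from any published test protocol):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring at least `correct` out of `trials` by pure
    coin-flip guessing (one-sided binomial tail, p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 8/10 looks impressive but can happen by luck about 5.5% of the time;
# 12/16 just clears the conventional 5% significance bar.
print(round(abx_p_value(8, 10), 3))   # prints 0.055
print(round(abx_p_value(12, 16), 3))  # prints 0.038
```

This is why short, casual blind sessions feel like guessing: with only a handful of trials, even a real but subtle audible difference rarely produces a score that statistics can distinguish from chance.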
 
