Can you hear a difference in Sound between Audio Amplifiers?

Do Amplifiers Sound Different?

  • Yes

    Votes: 103 60.2%
  • No

    Votes: 52 30.4%
  • Crickets.... What?

    Votes: 16 9.4%

  • Total voters
    171
cpp

Audioholic Ninja
Agree, as long as the person in charge of setting up and conducting such tests is unbiased, with no preconceptions whatsoever. I wonder who that person would be?:D If we are talking about credibility, perhaps get someone from the outside, someone who is experienced in such tests but has no experience with, or opinions on, hi-fi equipment. Not me though.:D
Therein lies the kicker: any "remarks" by the tester that hint at item 'A' or 'B', or at which one he/she likes, would screw up the whole test before it even starts... Just handle it like the Coke-Pepsi challenge, where the cans (in our case, the amps) are not identifiable and are marked only with the letter 'A' or 'B'.

Or just bring the stuff to my home; I have a room off the HT room where my amps, etc. are located. All you see are the speakers and the screen... I say if you can't see the item being tested, you can't have any preconceived notion to base a decision on. Rather simple.
 
PENG

Audioholic Slumlord
If the trial requires that medical staff see the patients on a regular basis for a longer period of time, over several weeks or longer, then there might be a good reason for a DBT over a SBT. I can't see how this would apply to listening tests of amplifiers or speakers, which will take place in one day. Single blind testing should be good enough.
Agree, except in this case I am concerned that the person conducting the tests (say Gene) would be in the room. I would be less concerned if someone else, known to have no opinion on this topic, were there only to do the switching, volume matching, etc. Obviously the person needs to have some basic knowledge. My guess is that Gene is not unbiased, though I mean this in a good and respectful way. I mean, he has been in this business forever, right?

Once you have an idea about that, then you need to look up statistics books and read all about what 95% confidence intervals mean :D. Then we can talk about how many trial repetitions are needed to make valid conclusions.
Of course, and I believe all degreed EEs, MBA or not, have to take a course in stats anyway, so I am confident Gene knows his stats. If need be, I can donate my fundamentals-of-statistics text that I know is still in the basement.:D

I could go on and on about this (I have probably lost readers by now :(). But I want to make it clear that I agree with Gene that simple sighted tests are appealingly simple, and easy to do when differences are easy to hear. But if you want to convince the doubters in this world, you better pay attention to blind testing and statistical significance.
Appealing, and also convincing (to me and you at least, I assume), as it surely is, I am still urging Gene to at least turn it into an SBT; as cpp suggested, it is not hard to do. It removes one doubt in one simple step.
 
PENG

Audioholic Slumlord
Irv's law of audio DBTs: If you can't discern any differences in a sighted test, a DBT will reveal no evidence of discerned differences either.

Since I dislike participating in DBTs, I would always precede a DBT with a sighted test, to potentially avoid wasting time and effort.
That, I fully agree with, as long as there are enough participants (with minimal hearing loss:D) present.
 
AcuDefTechGuy

Audioholic Jedi
I think there's some confusion here about what blind testing is.....when it should be used....For the simple purpose of comparing two different amplifiers, a simple sighted test is good enough...But if you want to convince the doubters in this world, you better pay attention to blind testing and statistical significance.
I completely agree.

The definition of a "blind test" is unequivocal. It means the subjects do NOT know which amp, speaker, or whatever is being studied, whether it is drug studies in hospitals or amp studies or speaker studies. The subjects do NOT know. PERIOD.

Double-blinded test means BOTH the subjects and the administrators do NOT know. Only the persons preparing the tests know. For example, in a drug DBT, this would be the pharmacist preparing the drugs (that's me ;)).

Single-blinded test means only the subjects do NOT know.

When it SHOULD be used is equivocal. In audio, if it is for casual listening or social gathering, you usually don't do blinded testing, especially DOUBLE-BLINDED testing.

But if you want it to be taken SERIOUSLY for PUBLICATION for the whole world to see, and you want people like Toole, Linkwitz, and all the major respected and published audio experts to take interest and use your studies as a REFERENCE, then yes, you SHOULD at least do valid single-blinded studies.

There is this thing called "human bias", and unless you are a god almighty, you will not escape it regardless of your status in life.

But sure, for a summer shindig or a sweet soirée, you probably "shouldn't" perform double-blinded tests. :D
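The single-blind protocol described above is usually scored as a forced-choice (ABX-style) trial: the listener repeatedly guesses which hidden amp is playing, and the score is compared against pure chance. As a rough sketch of how the scoring works (the 16-trial count and function name are illustrative, not something from this thread):

```python
import math

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (chance = 50%)."""
    hits = sum(math.comb(trials, k) for k in range(correct, trials + 1))
    return hits / 2 ** trials

# A common rule of thumb: 12 correct out of 16 trials clears the
# 5% significance bar, while 11 out of 16 does not.
p12 = abx_p_value(12, 16)   # ~0.038
p11 = abx_p_value(11, 16)   # ~0.105
```

This is why a couple of lucky guesses prove nothing: the listener has to beat chance over enough trials for the p-value to drop below the chosen significance level.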
 
Rickster71

Audioholic Spartan
That, I fully agree with, as long as there are enough participants (with minimal hearing loss:D) present.
Absolutely. What's the point, if at first you don't test the testers?
 
cpp

Audioholic Ninja
I completely agree.

The definition of a "blind test" is unequivocal. It means the subjects do NOT know which amp, speaker, or whatever is being studied, whether it is drug studies in hospitals or amp studies or speaker studies. The subjects do NOT know. PERIOD.

Double-blinded test means BOTH the subjects and the administrators do NOT know. Only the persons preparing the tests know. For example, in a drug DBT, this would be the pharmacist preparing the drugs (that's me ;)).

Single-blinded test means only the subjects do NOT know.

When it SHOULD be used is equivocal. In audio, if it is for casual listening or social gathering, you usually don't do blinded testing, especially DOUBLE-BLINDED testing.

But if you want it to be taken SERIOUSLY for PUBLICATION for the whole world to see, and you want people like Toole, Linkwitz, and all the major respected and published audio experts to take interest and use your studies as a REFERENCE, then yes, you SHOULD at least do valid single-blinded studies.

So true, I read the very same thing on the internet so it's got to be true:D. The sad thing is, we can hash the "meaning" over and over, but there still lies the simple question: when, where, who... All you have right now is forum talk. But as the guy said, "if you want it to be taken SERIOUSLY for PUBLICATION," I say do it right, with more than one or two samples of whatever you're testing.
 
PENG

Audioholic Slumlord
Regarding subject amps for the tests, Gene mentioned Pass Labs and Classé so far. I think we need to be sure that any other amps involved are designed and built to be transparent, preferably Class A/AB, and that they are not intended to have a signature sound like some of the Carver models and some tube amps. Other amps that come to mind are 250 WPC or above models from Outlaw, Bryston, Krell, ATI, Parasound, Anthem, Rotel, etc., all claiming to offer ruler-flat response from 20 Hz to 20 kHz, <0.05% THD, high enough damping factor, ultra-low output impedance, and 4-ohm capability. Anthem and Parasound are good candidates because they offer both regular models and high-end models in terms of SQ-related specs, not power output.

I thought ADTG might have a good venue, potentially:D. He has demanding speakers such as the 802 Diamond and Salon2, as well as easier-to-drive ones, all in the same room already. He has several amps that are known to be, or at least supposed to be, transparent, such as the ATIs, a few mid-range Denon AVRs, and some pro amps, I believe. We would just need to bring a couple more, such as the Classé, Pass Labs, and Emotivas that Gene talked about. I would add at least something like a Parasound, Anthem, and Krell, but no McIntosh. Heck, he could sell tickets for such an event. I wonder if Gene would even consider that at all though.:D
 
Swerd

Audioholic Warlord
There's more to my blind statistics rant from last night…

Another source of confusion is over the use of the word "bias". When most people say it, I think they mean attitudes unwarranted by facts, or prejudice. In statistics, there are several very different meanings (see Bias (statistics) - Wikipedia, the free encyclopedia). I'll quote it verbatim because I think using the word "bias" can lead to confusion when there are so many different meanings:

A statistic is biased if it is calculated in such a way that it is systematically different from the population parameter of interest. The following lists some types of, or aspects of, bias which should not be considered mutually exclusive:

  • Selection bias, where individuals or groups are more likely to take part in a research project than others, resulting in biased samples. This can also be termed Berksonian bias.
  • The bias of an estimator is the difference between an estimator's expectations and the true value of the parameter being estimated.
    • Omitted-variable bias is the bias that appears in estimates of parameters in a regression analysis when the assumed specification is incorrect, in that it omits an independent variable that should be in the model.
  • In statistical hypothesis testing, a test is said to be unbiased when the probability of rejecting the null hypothesis is less than or equal to the significance level when the null hypothesis is true, and the probability of rejecting the null hypothesis is greater than or equal to the significance level when the alternative hypothesis is true,
  • Detection bias is where a phenomenon is more likely to be observed and/or reported for a particular set of study subjects. For instance, the syndemic involving obesity and diabetes may mean doctors are more likely to look for diabetes in obese patients than in less overweight patients, leading to an inflation in diabetes among obese patients because of skewed detection efforts.
  • Funding bias may lead to selection of outcomes, test samples, or test procedures that favor a study's financial sponsor.
  • Reporting bias involves a skew in the availability of data, such that observations of a certain kind may be more likely to be reported and consequently used in research.
  • Data-snooping bias comes from the misuse of data mining techniques.
  • Analytical bias arises due to the way that the results are evaluated.
  • Exclusion bias arises due to the systematic exclusion of certain individuals from the study.

I'm not trying to blow anyone away here, but if I understood statistics better, I might be able to say all that more simply. It's not a simple subject, and I only understand it superficially :cool:.
 
Swerd

Audioholic Warlord
I think I also didn't adequately make my point in the paragraph where I said:
Statistics. Ugh! This is a subject worth entire academic departments, and I can't possibly deal with it adequately here. To decide how many repetitions of a listening test are enough, you must have an idea (from prior testing) just what percentage of listeners might hear a difference when there is a real difference. If two amps sound so similar that roughly 50% of listeners say they can hear a difference, you have to also ask how many hear a difference when both amps are identical. If that is also 50%, you will have to raise the bar pretty high before you can conclude that people really can hear differences. Once you have an idea about that, then you need to look up statistics books and read all about what 95% confidence intervals mean :D. Then we can talk about how many trial repetitions are needed to make valid conclusions.
Maybe talking about specific examples could better illustrate my point. These are hypothetical examples – I'm just spitballing here.

Imagine a simple sighted amplifier listening test with 10 listeners, where 8 could hear a difference and 2 could not. Call this test A. At the same time, listeners were asked if they could hear differences when the test amplifier was not changed. The result was that 1 out of 10 listeners (10%) reported hearing a difference. These preliminary results suggest that 70% (80% – 10%) of listeners could hear differences.

Now imagine another similar test (test B) where 60% could hear differences, but when the amp was not changed, 30% could hear differences, suggesting that 60% – 30% = 30% can hear differences.

In a larger blind listening test, how many people should be tested before statistics says we have 95% confidence that the results are real rather than an artifact of chance or bias? (See my above post about bias.)

If you believe test A, you could get away with about 25 people. If you believe test B, you may need 75 to 100.

If you can't go through that exercise and run enough tests to satisfy the demands of statistical significance, you're wasting your time. You might as well keep it simple, skip the blind testing, and have some fun.
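The two hypothetical tests above can be run through the standard normal-approximation formula for comparing two proportions at 95% confidence and 80% power. This is only one conventional way to size such a test, not necessarily the calculation behind the 25 and 75–100 figures in the post, so the numbers land in the same ballpark rather than matching exactly:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.8416):
    """Listeners needed per group to distinguish proportion p1 from p2
    with 95% confidence (two-sided) and 80% power, using the normal
    approximation for a two-proportion z-test."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Test A: 80% hear a difference vs. a 10% false-alarm rate
# Test B: 60% vs. 30%
n_a = n_per_group(0.8, 0.1)   # 7 per group
n_b = n_per_group(0.6, 0.3)   # 42 per group
```

Either way the direction of the argument holds: the smaller the real effect above the false-alarm rate, the more listeners the blind test needs.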
 
AcuDefTechGuy

Audioholic Jedi
I thought ADTG might have a good venue, potentially:D.
Yeah.......no......wait.......what? :eek: :D

No, no, no, no. I think Gene's $60,000 heavy-load speakers are the better venue. ;)

Besides, Florida is where things are happening, not Krypton where things are exploding. :eek: :D
 
Boerd

Full Audioholic
You can't hear the difference (assuming they have comparable specs, are driven within spec, and have comparable damping factors).
Simple as that.
If anybody thinks they can hear a 0.01% THD difference or a 1–3 dB SNR difference, then please...
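To put those THD figures in perspective, a distortion percentage converts to decibels relative to the fundamental as 20·log10(THD/100), so 0.01% THD sits about 80 dB below the signal. A quick sketch (the function name is just for illustration):

```python
import math

def thd_percent_to_db(thd_pct):
    """Convert a THD percentage to dB relative to the fundamental.
    E.g. 1% THD is 40 dB down; 0.01% THD is about 80 dB down."""
    return 20 * math.log10(thd_pct / 100)

level_db = thd_percent_to_db(0.01)   # about -80 dB
```

At -80 dB the distortion products are far below typical room noise, which is the substance of the claim that such differences are inaudible.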
 
Boerd

Full Audioholic
True.

But if Audioholics were to perform a blinded test, it would be unequivocally more valid than any unblinded audio test on earth. :D


Imagine most people saying, "I have VERY good hearing, yet I can't tell the difference between this $1k Denon/Emotiva and the $50k uber-priced amp"...:D
 
Irvrobinson

Audioholic Spartan
You can't hear the difference (assuming they have comparable specs, are driven within spec, and have comparable damping factors).
Simple as that.
If anybody thinks they can hear a 0.01% THD difference or a 1–3 dB SNR difference, then please...
For short-term comparisons I lean your way, but simple specs like you're listing don't tell the whole story. The frequency distribution of noise and distortion orders make blanket statements tougher to defend. But again, I wouldn't bet a nickel on my ability to differentiate between excellent amps on demand. Aural memory just stinks. If only we could compare two audio presentations side-by-side the way we can with video.
 
mtrycrafts

Seriously, I have no life.
I completely agree.

The definition of a "blind test" is unequivocal. It means the subjects do NOT know which amp, speaker, or whatever is being studied, whether it is drug studies in hospitals or amp studies or speaker studies. The subjects do NOT know. PERIOD.

Double-blinded test means BOTH the subjects and the administrators do NOT know. Only the persons preparing the tests know. For example, in a drug DBT, this would be the pharmacist preparing the drugs (that's me ;)).

Single-blinded test means only the subjects do NOT know.

When it SHOULD be used is equivocal. In audio, if it is for casual listening or social gathering, you usually don't do blinded testing, especially DOUBLE-BLINDED testing.

But if you want it to be taken SERIOUSLY for PUBLICATION for the whole world to see, and you want people like Toole, Linkwitz, and all the major respected and published audio experts to take interest and use your studies as a REFERENCE, then yes, you SHOULD at least do valid single-blinded studies.

There is this thing called "human bias", and unless you are a god almighty, you will not escape it regardless of your status in life.

But sure, for a summer shindig or a sweet soirée, you probably "shouldn't" perform double-blinded tests. :D
And you pass on the coded drugs, which only you can identify as placebo or the real stuff, to the administrator, then he in turn to the recipients in the study. At the end of the study, notes are compared on which drug worked how effectively, etc.:D

... My guess is that Gene is not unbiased, though I mean this in a good and respectful way. I mean, he has been in this business forever, right?
One may believe that they are not biased, but there is a subconscious aspect that one has no control over. That is also the reason for the DBT protocol. All these aspects took a long time to be discovered in the testing world, implemented, and accepted. But, as I mentioned, I have no standing anywhere.;)
 
mtrycrafts

Seriously, I have no life.
I agree, but Gene is not going to do a DBT, so I am trying to suggest an alternative way to add a little more credibility to the eventual test.
Understand completely, or at least I am pretty sure I do.;) :D
 
mtrycrafts

Seriously, I have no life.
Therein lies the kicker: any "remarks" by the tester that hint at item 'A' or 'B', or at which one he/she likes, would screw up the whole test before it even starts... Just handle it like the Coke-Pepsi challenge, where the cans (in our case, the amps) are not identifiable and are marked only with the letter 'A' or 'B'.

Or just bring the stuff to my home; I have a room off the HT room where my amps, etc. are located. All you see are the speakers and the screen... I say if you can't see the item being tested, you can't have any preconceived notion to base a decision on. Rather simple.
Better yet, just send the amps to me and say goodbye to them.;) :D No need to worry about any testing.:D

Oh, but wait, you guys would need to know where I live. :eek:
 
mtrycrafts

Seriously, I have no life.
Imagine most people saying, "I have VERY good hearing, yet I can't tell the difference between this $1k Denon/Emotiva and the $50k uber-priced amp"...:D
I have a friend who was a small-time reviewer whom I tried to convince to do a blind test. His response was something to the effect of: "What will my readers say if I cannot hear a difference?" Yep, that is the crux of the matter.
 
