Why Audio Amplifiers Can Sound Different

Do you think amplifiers can sound different?

  • Yes. Count me in!

    Votes: 27 77.1%
  • No way, not unless they are being overdriven.

    Votes: 5 14.3%
  • What did you say? I can't hear too good.

    Votes: 3 8.6%

  • Total voters
    35
RichB

Audioholic Field Marshall
My advice? Carefully consider all this stuff before spending money on heavily promoted Class A amplifiers for the sake of Class A, as they don't all perform the same trickery.

Now let the red chicklets fly.
You can measure reduced crossover IM distortion with some of that class-a trickery :)

The red chicklets are so last forum :p :D

- Rich
 
Goliath

Full Audioholic
The problem is that for long-term listening I find that I suffer much less from listening fatigue with some electronics than I do with others, especially amplifiers. That tells me that blind testing might be masking critical information. I might just be imagining things, but I can't explain the effect any other way.
If the differences were so small that simply 'not knowing' would mask them, then how significant do you think those differences were to begin with?

It's an illogical objection that is more related to the fear of failure of substantiating questionable claims, which is why placebophiles dredge up any excuse possible in order to wriggle their way out of the test.

There is no evidence that blind testing desensitises the listener's ability to hear audible differences (if anything, the opposite takes place), assuming any differences truly existed and were not a figment in the mind of the listener.

Audiophiles claim to be 'just listening' in a sighted test, or say things like 'trust your ears', but then need to peek while listening, have a priori knowledge while listening, and are influenced by a plethora of cognitive biases and by visual and other non-audio cues while listening. Clearly, they are not 'just listening'.

So when it comes right down to it, any objection that is typically made is just an excuse on the part of the listener as to why he shouldn't test his own claims, either because he/she knows their claims are bogus, or due to the fear of being tested.
 
Goliath

Full Audioholic
For the record, I'm firmly of the belief that amplifiers can sound different, so I agree with Gene and others.

Whether amplifiers *will* sound different or not is an entirely different question that is conditional on so many variables.
 
Irvrobinson

Audioholic Spartan
If the differences were so small that simply 'not knowing' would mask them, then how significant do you think those differences were to begin with?

It's an illogical objection that is more related to the fear of failure of substantiating questionable claims, which is why placebophiles dredge up any excuse possible in order to wriggle their way out of the test.

There is no evidence that blind testing desensitises the listener's ability to hear audible differences (if anything, the opposite takes place), assuming any differences truly existed and were not a figment in the mind of the listener.

Audiophiles claim to be 'just listening' in a sighted test, or say things like 'trust your ears', but then need to peek while listening, have a priori knowledge while listening, and are influenced by a plethora of cognitive biases and by visual and other non-audio cues while listening. Clearly, they are not 'just listening'.

So when it comes right down to it, any objection that is typically made is just an excuse on the part of the listener as to why he shouldn't test his own claims, either because he/she knows their claims are bogus, or due to the fear of being tested.
As the author of the text you quoted, I'll point out that there are many phenomena affecting human senses that have longer-term rather than immediate effects. For me, for example, it's lights with a subtle flicker. I may not notice it immediately, but over the course of an hour it'll give me a headache and make me nauseous.

As for your accusation of being afraid to be proven wrong, you're wrong.

Having sat through many single-blind tests and two double-blind tests, and having spoken to the other participants each time, there was so much simple guessing going on that I've concluded that for testing human hearing of *subtle* differences, specific comparison testing is crap. For the big stuff, like you sometimes hear between speakers, yeah, comparison testing works. For subtle differences, I've never met a human who reliably performed in a comparison test. So unless you want to argue that subtle differences are irrelevant, we don't agree.
 
Swerd

Audioholic Warlord
Having sat through many single blind tests and two double blind tests, and having spoken to the other participants each time, there was so much simple guessing going on that I've concluded for human hearing testing for *subtle* differences, specific comparison testing is crap. For the big stuff, like you sometimes hear between speakers, yeah, comparison testing works. For subtle differences, I've never met a human that reliably performed in a comparison test. So unless you want to argue that subtle differences are irrelevant we don't agree.
Logically, this is throwing out the baby with the bath water. If audible differences are so small that listeners are simply guessing, the unavoidable conclusion is they could hear no differences.

What nearly all audio listening tests lack is a good positive control that establishes just what is the lower limit that listeners can reliably hear.

See posts #24 and #28 in this thread for a good example of positive controls in a listening test: http://forums.audioholics.com/forums/threads/the-stock-power-cord-fan-club.94295/page-2#post-1079131. TLS Guy described Peter Walker's listening tests of tube vs. solid state amps. During that test, he showed that listeners could not hear differences between amps as long as their THD was less than 2%.

Another unintentional example I once saw was in a blind test of different capacitors in passive speaker crossovers. Could anyone reliably hear differences when the capacitors varied between cheap and high-priced boutique caps? One person there adamantly insisted he could easily hear differences during the test. When the results were displayed, they showed clearly that people were guessing: right half the time, and wrong half the time. No one, among some 40 DIY speaker builders, could reliably hear differences. This one listener wouldn't stop complaining about how the blind aspect of the test dulled his ability to hear fine differences.

Earlier that day, this same guy had shown a 2-way speaker he had built. It had expensive Seas drivers and looked nicely built. Unfortunately, while working late the night before, he had mistakenly wired one of his speakers with the tweeter in opposite polarity to the woofer, creating a large suck out across the crossover frequency range. One of the more experienced DIY builders there easily heard it, and gently pointed it out while trying to avoid embarrassing the guy. Not surprisingly, that poor guy hadn't noticed anything wrong with the sound of his babies.

Conclusion: Because this guy could not hear when a woofer and tweeter were out of phase with each other, could he reliably hear differences between different capacitors in otherwise identical speakers? Highly unlikely.
 
Irvrobinson

Audioholic Spartan
Logically, this is throwing out the baby with the bath water. If audible differences are so small that listeners are simply guessing, the unavoidable conclusion is they could hear no differences.
I think the bath water is bad. I've been pretty specific about issues with audio comparisons. If you don't agree I'm comfortable with that. I think any "testing" where most of the data is just guessing is silly.
 
Swerd

Audioholic Warlord
I think any "testing" where most of the data is just guessing is silly.
I certainly agree with that. If people couldn't reliably hear differences and could only guess, there won't be any useful conclusions.

That's why there must be positive and negative controls in these tests. That's why I liked that Peter Walker test that TLS Guy described. It clearly showed that blinded listeners can hear the difference between amps if one of them has more than 2% THD. If differences are less than that, people cannot hear the difference.

Just because a listening test is done under blinded conditions doesn't automatically make it a valid test. It has to establish just what the listeners can detect with the listening conditions available during the test.
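For context, THD (the figure Walker's 2% threshold is stated in) is the RMS sum of the harmonic amplitudes relative to the fundamental. A minimal sketch of that arithmetic, with invented amplitudes:

```python
from math import sqrt

def thd_percent(fundamental: float, harmonics: list) -> float:
    """Total harmonic distortion: RMS sum of harmonic amplitudes
    relative to the fundamental amplitude, as a percentage."""
    return 100 * sqrt(sum(h * h for h in harmonics)) / fundamental

# A 1.0 V fundamental with a few small harmonic components:
print(round(thd_percent(1.0, [0.015, 0.008, 0.004]), 2))  # 1.75
```

By the Walker result described above, an amp measuring like this (well under 2%) would be indistinguishable from another clean amp in a blind comparison.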
 
Goliath

Full Audioholic
Irvrobinson said:
Having sat through many single blind tests and two double blind tests, and having spoken to the other participants each time, there was so much simple guessing going on that I've concluded for human hearing testing for *subtle* differences, specific comparison testing is crap.
There are plenty of examples of positive results that can be found for subtle differences in blind testing. Of course, whether the results agree with your preconceptions are another story.

For the big stuff, like you sometimes hear between speakers, yeah, comparison testing works. For subtle differences, I've never met a human that reliably performed in a comparison test. So unless you want to argue that subtle differences are irrelevant we don't agree.
Subtle differences of what? I never said that subtle differences are irrelevant, but to claim that people can't reliably hear subtle differences in a 'comparison test' (what is that supposed to be? ABX?) is nonsense.

If listening blind, a condition where bias and prejudice are minimised, aiding listener sensitivity to subtle differences, can result in a masking of audible differences, I can't even imagine how much masking must take place in the cesspool of bias, daydreams, peeking, hearing whatever you want, mood, non-audio cues, and visual cues in a sighted comparison! LOL.

By minimising the above variables, hearing sensitivity can only improve, because the listener is 'only listening' and 'trusting their ears', not their eyes, expectations, and preconceptions.

Even if your position were true that blind testing could possibly make a person 'not hear' a subtle difference, a sighted comparison does no better, given the ..er, vagaries of human perception.
 
jkenny

Enthusiast
Goliath,
I think one of the points you might be missing is that as Swerd said "Just because a listening test is done under blinded conditions doesn't automatically make it a valid test. It has to establish just what the listeners can detect with the listening conditions available during the test."

What I think underlies this quote is that we can't guarantee the conditions required for a valid blind test unless we follow the recommended guidelines established in BS.1116 or other ITU guidelines for blind testing.

Most blind testing that we see on forums falls far, far short of these guidelines. The typical home test conditions are so varied & the listeners & playback systems so unqualified (unqualified in the sense that we don't know what level of difference is actually discernible) that we need some self-reporting measures included in the test to verify how valid it is for detecting the sort of differences being examined.

Let's take ABX testing as the typical blind test we see on audio forums. The definition of this test has been quoted to me on Hydrogen Audio as "ABX is a minimalistic method to test for false positives" - in other words, it minimises hearing a difference when it's not there. (Yes, I'm involved in an ABX thread on HydrogenAudio much along the same lines as here, but more vicious & personal.) This sounds like a very reasonable approach, but let's look at the implications. What if the test is so organised that only really gross differences will be noticed, i.e. the listeners or test conditions don't allow for recognition of subtle differences - maybe the playback system has a level of noise that masks such subtleties, maybe the background noise does, whatever? Do the ABX results coming from such a test give any indication that such a condition pertains, i.e. that subtle differences couldn't be heard in this particular test run?

This is not condemning all ABX tests, as many professionally run tests follow the recommended guidelines contained in the ITU documents, but it is suggesting that we need to pay attention to "Just because a listening test is done under blinded conditions doesn't automatically make it a valid test."

My feeling about the results from these home-run tests is that we have a very wide range in the sensitivity of the tests - some might be sensitive enough to reveal subtle differences, but I suspect most are only revealing of gross differences. Obviously this will result in a much higher number of "dubious" null results lumped in with "valid" null results.

Now, I know the argument that null results don't prove anything, but the fact of the matter is that the more null results that are accumulated, for say blind testing of amplifiers, the more the people who are "on the fence" believe that there is no audible difference. This sets up a negative expectation bias which only internal controls will have any hope of picking up.

Jakob1983, on that HA thread, said something which I've long thought is the best way to do such tests - invisibly, i.e. the participants don't know they are doing a test. But this is extremely hard to organise. However, it would cater for any negative expectation bias - there can be no expectation if the listener doesn't know what's being tested.

BTW, from examples I've seen, the power of this bias is stronger than sighted bias. There is a series of 4 get-togethers documented on the Pinfishmedia forum - 4 listening tests run over the course of a year or so. The main organiser of the get-togethers was a guy called Vital & the threads can be found under the titles "DBO I" through "DBO IV". Anyway, over the course of the get-togethers they were listening sighted & blind to different DACs. In total maybe 30-40 people were involved between all tests & maybe 10-15 DACs. Not until the final get-together did they hear a difference between DACs, & that seemed to be because someone at the event was able to point out what to listen for. Vital, who would be an open-minded objectivist, said that the blind tests' null results were part of the reason why he couldn't hear any differences. I just looked into that forum to check the name of the threads & saw an ABX thread in which he restated his opinion again today (I can't post a link):
"I think ABX shows us that the differences heard are much less than reported in sighted tests, but that the process can in itself hide the fact that differences do exist."

I think everyone should learn quite a lot from this.
One of the important things to realise from these meetings, & you can read it in their post-meeting write-ups, is that something was preventing them from hearing differences during sighted listening. Was it this negative expectation bias, or was it not knowing what to listen for? According to Vital, the blind testing was to blame to a large extent. I suspect they were demotivated in trying to hear differences.

Now, I don't expect home-run blind tests will read or follow ITU recommended guidelines, so again I favour what Swerd said: "That's why there must be positive and negative controls in these tests." These internal controls, I believe, could give us a handle on the validity of such tests. Without these controls we are seeing results & have no way of evaluating how valid they are for revealing audible differences & to what level.

It seems to me the only logical way to deal with the huge variability of the conditions under which these tests are run. If those conditions can't be controlled, then some way of verifying that the test can do what it is supposed to do is needed, i.e. a way of verifying that this run of the test is actually capable of revealing an acceptable level of differences.
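The internal controls argued for here can be sketched in code: hide known-audible (positive) and known-identical (negative) control trials among the real comparisons, and only trust a session whose controls behaved. The function name and thresholds below are illustrative assumptions, not taken from BS.1116 or any specific test:

```python
def session_is_valid(pos_correct, pos_trials,
                     neg_diff_reported, neg_trials,
                     min_pos_rate=0.9, max_neg_rate=0.2):
    """Accept a blind-test session only if the hidden controls behaved:
    positive controls (a known-audible difference, e.g. a small level
    offset) were mostly detected, and negative controls (identical
    stimuli) rarely produced a 'difference' response."""
    pos_rate = pos_correct / pos_trials
    neg_rate = neg_diff_reported / neg_trials
    return pos_rate >= min_pos_rate and neg_rate <= max_neg_rate

# A session where the listener missed most known-audible differences
# cannot support any conclusion about subtler ones.
print(session_is_valid(3, 10, 1, 10))   # False: positive controls failed
print(session_is_valid(10, 10, 1, 10))  # True
```

Under this scheme a null result from an invalid session is simply discarded rather than lumped in with the "valid" nulls.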
 
jkenny

Enthusiast
The problem is that for long-term listening I find that I suffer much less from listening fatigue with some electronics than I do with others, especially amplifiers. That tells me that blind testing might be masking critical information. I might just be imagining things, but I can't explain the effect any other way.
If the differences were so small that simply 'not knowing' would mask them, then how significant do you think those differences were to begin with?
I'm not sure that this is logically correct, Goliath - you're making the assumption that the test is not masking differences & therefore that the "differences were so small". Could it not be that the test itself has a masking effect on subtle or less-than-subtle differences? Without controls we have no measure of what level of differences the test is capable of revealing.

Now again, we are not talking about blind tests which we see in research papers, we are talking about the usual stuff seen on forums.

Can you say that the only difference between long-term listening & a blind test is 'not knowing'? Is that the only factor that has a bearing on the participants & the results?
 
jkenny

Enthusiast
Having sat through many single blind tests and two double blind tests, and having spoken to the other participants each time, there was so much simple guessing going on that I've concluded for human hearing testing for *subtle* differences, specific comparison testing is crap. For the big stuff, like you sometimes hear between speakers, yeah, comparison testing works. For subtle differences, I've never met a human that reliably performed in a comparison test. So unless you want to argue that subtle differences are irrelevant we don't agree.
I have made the same point myself - that fatigue, loss of focus & second-guessing oneself are all factors that come into play in blind tests. In my experience & reading of blind tests, people often start off fairly confident & seem sure that they hear differences, but then doubt sets in & they begin to be less sure & second-guess themselves.

I think it would be interesting to analyse ABX test trials to see if there is a higher occurrence of correct answers earlier in the test than later, i.e. if trials 1 to 5 show more correct answers than trials 10 to 16 (in a 16-trial run).
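That analysis is easy to run on a trial log. A minimal sketch, assuming the log is just a list of 1/0 results in trial order (the data here is made up for illustration):

```python
def fatigue_split(results):
    """Compare the correct-answer rate in the first and second half
    of an ABX run; a large drop hints at fatigue or loss of focus."""
    half = len(results) // 2
    early = sum(results[:half]) / half
    late = sum(results[half:]) / (len(results) - half)
    return early, late

# Hypothetical 16-trial run: strong early, near-chance late.
log = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
early, late = fatigue_split(log)
print(f"early: {early:.2f}, late: {late:.2f}")  # early: 0.88, late: 0.38
```

A pattern like this would be invisible in the overall score (10/16 here), which is exactly the point about needing to look inside the run.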

Again, there is no way to know from the results of an ABX test if someone is just not listening & just hitting random keys. Actually, I tell a lie - there is one way that I have seen: the log of the ABX test showed timings, which revealed that some trials took 1 second, some 2 seconds, some 3 seconds & some 4 seconds - the rest of the 16 trials were not indicative of this form of "cheating". The person who posted these null results didn't mention any issue with them & only when questioned finally admitted that he had just hit keys. His excuse was that he had run the test earlier & couldn't tell any difference, but had overwritten the log :)

Anyway, the point again is that without proper internal controls we have no way of catching this sort of intentional random guessing or the more difficult, unintended random guessing due to unnoticed loss of focus, tiredness, etc.
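The timing check described above can be automated as one such internal control. A sketch, assuming per-trial durations in seconds are available in the log (the values and the plausibility threshold are invented for illustration):

```python
def flag_fast_trials(durations_s, min_plausible_s=5.0):
    """Return indices of trials answered too quickly to have involved
    actual listening; a cluster of these suggests random key-mashing."""
    return [i for i, d in enumerate(durations_s) if d < min_plausible_s]

# Hypothetical log: two plausible trials, then a burst of 1-4 second answers.
durations = [22.0, 18.5, 1.0, 2.0, 3.0, 4.0, 25.1, 19.7]
print(flag_fast_trials(durations))  # [2, 3, 4, 5]
```

A result file with several flagged trials would be excluded, rather than counted as another null.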

Now, I have been told that introducing controls into ABX tests would be impossible for a number of reasons. If that is truly the case, then ABX testing should only be used in situations where an attempt at following the BS.1116 recommendations is adopted & documented.
 
Goliath

Full Audioholic
I'm not sure that this is logically correct, Goliath - you're making the assumption that the test is not masking differences & therefore that the "differences were so small". Could it not be that the test itself has a masking effect on subtle or less-than-subtle differences? Without controls we have no measure of what level of differences the test is capable of revealing.

Now again, we are not talking about blind tests which we see in research papers, we are talking about the usual stuff seen on forums.

Can you say that the only difference between long-term listening & a blind test is 'not knowing'? Is that the only factor that has a bearing on the participants & the results?
You are posting on Hydrogen Audio right now, aren't you? Of course you are! ;)

Your objections have been dealt with over there in detail, over the course of 50 pages. No need to retread the same points yet again.
 
Goliath

Full Audioholic
jkenny said:
I have made the same point myself - that fatigue, loss of focus & second guessing oneself are all factors that come into play in blind tests.
Since I can't help myself: do you think fatigue, loss of focus and second-guessing do not apply in casual sighted listening, where strenuous critical listening and peeking are involved? :)

If you can hear a subtle difference in a sighted test, what mechanism allowed you to hear the subtle difference in the sighted test that automagically prevents you from hearing the subtle difference in a blind test?

Lack of 'stress'? Peeking to see the component? How do stress levels increase when not peeking, and can one peek stress-free? I may have forgotten other excuses; can you help me out here?

You are doing well over at HA. Just continue at it. Another 50 pages and I'm sure you'll find your way.
 
highfigh

Seriously, I have no life.
This one listener wouldn't stop complaining about how the blind aspect of the test dulled his ability to hear fine differences.

Earlier that day, this same guy had shown a 2-way speaker he had built. It had expensive Seas drivers and looked nicely built. Unfortunately, while working late the night before, he had mistakenly wired one of his speakers with the tweeter in opposite polarity to the woofer, creating a large suck out across the crossover frequency range. One of the more experienced DIY builders there easily heard it, and gently pointed it out while trying to avoid embarrassing the guy. Not surprisingly, that poor guy hadn't noticed anything wrong with the sound of his babies.

Conclusion: Because this guy could not hear when a woofer and tweeter were out of phase with each other, could he reliably hear differences between different capacitors in otherwise identical speakers? Highly unlikely.
They took away his ear goggles? Those bastiges!

I know someone who had a recording studio, and when I was working for a car audio shop, he called to ask if I could bring the RTA. I did, but before it was even turned on, I said that something was wired in reverse phase because I could feel that sensation. It annoys me to death!

We fired it up and immediately saw a deep, narrow V at about 2 kHz, corresponding to the crossover region. It turned out someone had replaced a horn driver and didn't pay attention to the polarity. Once it was reversed, the problem was solved.

The guy with the two-way may just have been so tired, and his ears so fatigued, that nothing sounded good, bad or indifferent. Been there, done that.
 
jkenny

Enthusiast
You are posting on Hydrogen Audio right now, aren't you? Of course you are! ;)

Your objections have been dealt with over there in detail, over the course of 50 pages. No need to retread the same points yet again.
Aha, as RichB guessed - you are a HA member, but using a different member name here. It's not difficult to know that I'm also on HA, as I use the same username everywhere. It would appear you don't - care to tell us your HA username?

I think this is a different audience & they can read your reply here should you decide to make one, but your answer that my objections were already dealt with in detail on HA is incorrect - there were just a lot of the same sort of claims made on HA as you just made here: that such & such has already been dealt with.

I outlined the points where I thought you were wrong in what you said. That's fine if you don't want to deal with them - that's up to you & I have no problem with that.
 
jkenny

Enthusiast
Since I can't help myself, do you think fatigue, loss of focus and second guessing does not apply in casual sighted listening where strenuous critical listening peeking is involved? :)
Well, casual sighted listening usually doesn't involve repeated listening to the same two short snippets of music over & over for 16 trials, with the requirement to decide each time whether there is a difference between them. So the answer would be no :)

If you can hear a subtle difference in a sighted test, what mechanism allowed you to hear the subtle difference in the sighted test that automagically prevents you from hearing the subtle difference in a blind test?
I believe this has been answered many times already - the listening conditions & internal factors change when you go from casual sighted listening (I note HA doesn't deem this a "test") to a blind "test".

Lack of 'stress'? Seeing peeking to see the component? How does stress levels increase when not peeking and can one peek, stress free? I may have forgotten other excuses, can you help me out here?

You are doing well over at HA. Just continue at it. Another 50 pages and I'm sure you'll find your way.
I think it's already answered - moving from casual listening to a blind "test" is a change in internal psychology, among other things.
It's a bit like asking where the stress comes from between reading a textbook before an exam & doing an open-book exam.
 
mtrycrafts

Seriously, I have no life.
...

If you can hear a subtle difference in a sighted test, what mechanism allowed you to hear the subtle difference in the sighted test that automagically prevents you from hearing the subtle difference in a blind test?

...
Very good question, but the answer is not what he is looking for. ;)
The eyes give you incredible help in differentiating between products. :D
 
Irvrobinson

Audioholic Spartan
Very good question, but the answer is not what he is looking for. ;)
The eyes give you incredible help in differentiating between products. :D
No, it's an irrelevant question, and it has nothing to do with what I said. What I said was that blind testing between electronics always, in my experience, resulted in mere guessing by the participants, so IMO the results were bogus. I did not say that sighted testing was the answer. I did say that I have personally found that I sometimes, though not always, develop preferences for different electronics because I find myself having less listening fatigue. I have never said that is an indicator of a product's superiority, just that I sometimes developed a preference. I have never been able to differentiate between well-designed electronics in sighted or blinded comparison tests.
 
jkenny

Enthusiast
Simple statements like "how come a subtle difference disappears when you are prevented from peeking?" are one side of the coin & ignore the other side.

Nobody is denying that some sighted differences disappear when blind tested. There are two possibilities in this outcome: either the perceived difference isn't real & the blind test correctly uncovered this, or the perceived difference is real & the blind test masked it (a false negative). The challenge is to find out which is correct.

In order to test this, you cannot use a test that has no measure of the level of false negatives it returns. The ABX test is one such test.

In order to determine whether a test result is correct or a false negative, one needs to know the sensitivity of the test. Determining the sensitivity of a test requires knowing both the false positive & false negative levels.

This is why BS.1116 has the following recommendation on the use of internal controls in blind tests:
A major consideration is the inclusion of appropriate control conditions. Typically, control conditions include the presentation of unimpaired audio materials, introduced in ways that are unpredictable to the subjects. It is the differences between judgement of these control stimuli and the potentially impaired ones that allows one to conclude that the grades are actual assessments of the impairments.
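The sensitivity referred to above is the standard signal-detection quantity, estimated from control trials where the ground truth is known. A minimal sketch with made-up tallies (the function and numbers are illustrative, not from BS.1116):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of real (known-audible) differences the
    listener detected.  Specificity: fraction of identical pairs
    correctly called identical."""
    sensitivity = tp / (tp + fn)   # 1 - false-negative rate
    specificity = tn / (tn + fp)   # 1 - false-positive rate
    return sensitivity, specificity

# Hypothetical control-trial tallies from one listening session:
# 6 of 10 known-audible differences detected, 9 of 10 identical pairs passed.
print(sensitivity_specificity(6, 4, 9, 1))  # (0.6, 0.9)
```

A session with low sensitivity on its positive controls can't distinguish "no audible difference" from "difference masked by the test", which is the false-negative problem being described.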
 