Do modern amplifiers compress before clipping?

RichB

Audioholic Field Marshall
EDIT: A better title: Do modern amplifiers produce audible compression before audible distortion when they are clipped?

Most speakers do not require much power to play reasonably loud. But many struggle to determine the amount of power actually required.

There have been some good discussions at AH about the importance of loudspeaker impedance curves and phase angles in determining the load that is presented to the amplifier. Some speakers clearly require more current than the manufacturer’s single impedance rating would suggest. A high-current amplifier may be required.

From my reading, class A/B amplifier designs clip differently from one another. Some produce fewer artifacts by simply limiting the peak when sufficient power is not available. Such a design makes it more difficult to hear distortion when clipping occurs; however, the signal is still limited when the power cannot be supplied.

Here is an interesting article about clipping that attempts to determine why tweeters fail when amplifiers are seriously overdriven. Better amplifiers can actually sound very good while clipping peaks; when severely overdriven, the author concludes, it is compression that is the likely cause of tweeter failure. The better the amplifier, the more likely it will be played into clipping without the listener hearing sustained audible distortion.

http://www.adx.co.nz/techinfo/audio/note128.pdf

My personal experience with some hard-to-drive speakers is that an AVR sounded a bit flat long before I heard objectionable distortion.

I expect modern AVRs to have low distortion, and they may be designed specifically to limit distortion. So isn’t it reasonable to assume that, driving some speakers, an AVR is more likely to fail to produce the dynamics before producing large amounts of audible distortion?

If this is the case, then the argument for an outboard amp may not be about average volume but instead about preserving the dynamics in a system that is capable of producing them.

- Rich
 
PENG

Audioholic Slumlord
Most speakers do not require much power to play reasonably loud. But many struggle to determine the amount of power actually required.

There have been some good discussions at AH about the importance of loudspeaker impedance curves and phase angles in determining the load that is presented to the amplifier. Some speakers clearly require more current than the manufacturer’s single impedance rating would suggest. A high-current amplifier may be required.

From my reading, class A/B amplifier designs clip differently from one another. Some produce fewer artifacts by simply limiting the peak when sufficient power is not available. Such a design makes it more difficult to hear distortion when clipping occurs; however, the signal is still limited when the power cannot be supplied.

Here is an interesting article about clipping that attempts to determine why tweeters fail when amplifiers are seriously overdriven. Better amplifiers can actually sound very good while clipping peaks; when severely overdriven, the author concludes, it is compression that is the likely cause of tweeter failure. The better the amplifier, the more likely it will be played into clipping without the listener hearing sustained audible distortion.

http://www.adx.co.nz/techinfo/audio/note128.pdf

My personal experience with some hard-to-drive speakers is that an AVR sounded a bit flat long before I heard objectionable distortion.

I expect modern AVRs to have low distortion, and they may be designed specifically to limit distortion. So isn’t it reasonable to assume that, driving some speakers, an AVR is more likely to fail to produce the dynamics before producing large amounts of audible distortion?

If this is the case, then the argument for an outboard amp may not be about average volume but instead about preserving the dynamics in a system that is capable of producing them.

- Rich
The answer is no, they don't compress before clipping. The linked article also said so: the amp clipped, then compressed if the user continued to turn the volume up beyond the clipping point. If you use Fourier analysis to break down the clipped waveform into individual harmonics, the low frequencies would be clipping long before the high-frequency components do, so until then the amp would be compressing, as the author called it.
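This Fourier point is easy to check numerically. A minimal sketch (the tone frequency and clip level are hypothetical, not from the article): hard-clip a sine at a fixed "rail" and read the component amplitudes off an FFT. Below the rail the fundamental tracks the gain exactly; once the rail is hit, the fundamental grows only slowly (compression) while odd harmonics appear.

```python
import numpy as np

fs = 48_000                          # sample rate, Hz (1 s of signal -> 1 Hz bins)
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)   # 100 Hz test tone
rail = 1.0                           # clip level, standing in for the supply rail

def harmonic(gain, freq):
    """Amplitude of one frequency component of the clipped output."""
    out = np.clip(gain * tone, -rail, rail)
    spec = np.abs(np.fft.rfft(out)) * 2 / len(out)
    return spec[freq]                # 1 Hz bins: index == frequency in Hz

for gain in (0.5, 1.0, 2.0, 4.0):
    print(f"gain {gain:>3}: fundamental {harmonic(gain, 100):.2f}, "
          f"3rd harmonic {harmonic(gain, 300):.2f}")
```

At a gain of 4 the fundamental has stalled near the square-wave limit of 4/π ≈ 1.27 times the rail even though the input is four times larger; that flattening is the "compression" being discussed.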
 
RichB

Audioholic Field Marshall
The answer is no, they don't compress before clipping. The linked article also said so: the amp clipped, then compressed if the user continued to turn the volume up beyond the clipping point. If you use Fourier analysis to break down the clipped waveform into individual harmonics, the low frequencies would be clipping long before the high-frequency components do, so until then the amp would be compressing, as the author called it.
That makes sense.

The author stated that listeners can tolerate much more clipping than most people realize, so it seems reasonable that bass or other high-energy passages may get compressed due to clipping without the listener being aware of it.

I find that interesting because it matches my experience.
Cranking a lower-powered AVR has made the sound flatter before it sounded harsh from excessive clipping.

There may be merit, for some, in considering a lower-cost AVR plus a separate amp for reasons that go beyond the simple math where an AVR may provide 100 watts or so and the dedicated amp 200 watts. Current capability and dynamic power may contribute to dynamics in a way that is not represented by that 3 dB.

- Rich
 
PENG

Audioholic Slumlord
That is true for people who need the power. I believe many of us, myself included, only listen at SPLs averaging between 75 and 85 dB at most, sitting 3 to 4 meters from speakers that are not too hard to drive. For those people, most mid-range to high-end 100 W AVRs may do the trick too. I do agree one should stick with an AVR that has pre-outs if one wants the flexibility to use it in a larger room or to change one's listening habits down the road.
 
Swerd

Audioholic Warlord
I think there is some confusion (maybe only on my part :)) about the term compression.

I think of compression as the limits to a loudspeaker's ability to produce louder sound with signals of increasing power. Imagine if we had a "perfect" amplifier that produced unlimited power with no distortion, and a "real world" loudspeaker. As power increases, the speaker gets louder in a predictable (linear or log-linear) fashion. At some power level, the speaker can't continue doing that – increased power no longer results in increased loudness. The speaker's response is said to be compressed. Sometimes I hear people refer to this as a speaker's "dynamic range".

All speakers have some power level at which they begin to compress. There are a variety of loudspeaker features that lead to compression, but the Rane article mentioned over-excursion of the voice coil/cone assembly and voice coil heating. These will ultimately result in speaker failure, also known as thermal failure. (In my experience, when a speaker manufacturer lists an upper power limit for a speaker, it is usually a power at which thermal failure happens, and not the upper power limit of the speaker's dynamic range.)

After reading the Rane article, I think it says "clipping" is what an amplifier does as it is over driven, and "compression" is what a speaker does when over driven. They're related, but not the same thing.

The article's main point was that woofers and tweeters don't necessarily have the same upper limits of response as power increases, nor do they have the same thermal failure range. A power level that leads to amplifier clipping at a woofer frequency can at the same time over drive a tweeter. The woofer produces sound from the clipped signal without failure, and the tweeter compresses and ultimately fails, without necessarily producing a clipped signal from the amp.
 
Stereojeff

Enthusiast
There are lots of ways to skin a cat. While amplifiers themselves do not compress before clipping, some amplifiers are designed with built-in clip limiters. Done properly, these amps use a very sophisticated peak detection circuit. When output peaks are detected, an input level compressor is inserted into the circuit. When properly designed, the peak limiter can keep distortion below a predetermined figure--let's call it 3%--regardless of the level of the overdriven input signal. For example, ATI's AT602 and AT1202 have this feature. It is also available to our OEM customers. Some choose to use input limiting and some do not.
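As a rough illustration of the scheme described here (this is a generic limiter sketch, not ATI's actual circuit, threshold, or time constants), a peak detector with instant attack and slow release can scale the input down so the output never has to exceed the clip level:

```python
import numpy as np

def peak_limited(x, ceiling=1.0, release=0.9995):
    """Instant-attack, exponential-release peak limiter (sketch)."""
    env = 0.0                          # envelope-follower state
    out = np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), env * release)           # track signal peaks
        gain = ceiling / env if env > ceiling else 1.0
        out[i] = s * gain                          # reduce gain only when needed
    return out

t = np.arange(4800) / 48_000
quiet = peak_limited(0.5 * np.sin(2 * np.pi * 100 * t))   # passes untouched
loud = peak_limited(3.0 * np.sin(2 * np.pi * 100 * t))    # held at the ceiling
print(np.max(np.abs(quiet)), np.max(np.abs(loud)))
```

Signals below the ceiling pass through unchanged; overdriven signals are gain-reduced rather than hard-clipped, which is why the distortion can stay below a fixed figure.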

Amps 'R Us
Jeff
 
RichB

Audioholic Field Marshall
There are lots of ways to skin a cat. While amplifiers themselves do not compress before clipping, some amplifiers are designed with built-in clip limiters. Done properly, these amps use a very sophisticated peak detection circuit. When output peaks are detected, an input level compressor is inserted into the circuit. When properly designed, the peak limiter can keep distortion below a predetermined figure--let's call it 3%--regardless of the level of the overdriven input signal. For example, ATI's AT602 and AT1202 have this feature. It is also available to our OEM customers. Some choose to use input limiting and some do not.

Amps 'R Us
Jeff
I guess there are some terminology issues.
An amplifier that is limiting the peaks with a compressor circuit is lopping off some of the dynamic range.

I have heard the Pioneer SC-07 in action, and there was a point where it was flattening out the sound.
From what I have read, some class-D amps include limiters so that they will not clip hard; this gets advertised as soft clipping.
Since musical peaks are of limited duration, I would expect some amps to simply not produce the full amplitude of the signal.

This is not the same as compressing music, where the soft sounds are made louder and the loud sounds softer.
In this case only the peaks are made softer. Some people may have come to expect this behavior; it only feels loud when taken to an extreme level. Thus the author puts forth the argument that a cleaner amp gets turned up louder into its peak limit, and this can lead to speaker failure.

I have turned on the 4-ohm setting in Yamaha and Onkyo AVRs and it sounded flat as well.

- Rich
 
PENG

Audioholic Slumlord
I think there is some confusion (maybe only on my part :)) about the term compression.

I think of compression as the limits to a loudspeaker's ability to produce louder sound with signals of increasing power. Imagine if we had a "perfect" amplifier that produced unlimited power with no distortion, and a "real world" loudspeaker. As power increases, the speaker gets louder in a predictable (linear or log-linear) fashion. At some power level, the speaker can't continue doing that – increased power no longer results in increased loudness. The speaker's response is said to be compressed. Sometimes I hear people refer to this as a speaker's "dynamic range".
I thought you understood it right, except the Rane article said the low-frequency signal starts clipping first, and then compression starts while the high-frequency components have not yet reached the clipping point. Since people cannot hear low-frequency distortion (due to clipping) very well, they tend to crank the volume up to compensate, so the high-frequency components continue to climb until they reach the clipping point; perhaps only then do people hear the distortion and stop turning the volume up. The tweeter would have blown from heat and/or excessive excursion long before the amp reached the point where it would begin to clip the high frequencies as well.

After reading the Rane article, I think it says "clipping" is what an amplifier does as it is over driven, and "compression" is what a speaker does when over driven. They're related, but not the same thing.
Exactly, but once the low-frequency components reach the clipping point, as Rane said, the amp starts to compress because it cannot go any further: the waveform gets flattened, and that is truly compression by definition (including yours). That's what I said to Rich in post #2, and he agreed in his post #3. :)

The article's main point was that woofers and tweeters don't necessarily have the same upper limits of response as power increases, nor do they have the same thermal failure range. A power level that leads to amplifier clipping at a woofer frequency can at the same time over drive a tweeter. The woofer produces sound from the clipped signal without failure, and the tweeter compresses and ultimately fails, without necessarily producing a clipped signal from the amp.
That's where you might be slightly off, just a little. The article did not say the tweeter would fail from compression; he said the woofer compresses, and that happens before the high frequencies reach the clipping point. The problem is that the tweeter would already be overdriven well before the amp begins to clip the high frequencies, because the power-handling capability of tweeters is much lower. So when he said compression is the real cause, he was referring to compression of the low frequencies.

To summarize, the failure sequence would seem to be: the amp clips the LF components; the user increases the volume; the amp compresses the LF once the clipping point is reached; the user continues to increase the volume for louder music; the LF continues to compress; the HF volume goes up dB by dB; and the tweeter then fails from heat and/or voice-coil over-excursion before the amp reaches its clipping point for the HF. A more detailed account below:

1) The person cranks the volume up for louder sound and/or more bass.
2) If the amp is not powerful enough, the user cranks it up so high that the amp clips the low frequencies but not yet the high frequencies, because typical music content has much lower high-frequency energy.
3) At that point the owner cannot hear the LF distortion too well, so he keeps increasing the volume.
4) Now the amp has no choice but to compress, because it cannot increase the LF component voltage any further.
5) Again, the amp is not yet clipping the higher frequencies, so if the amp is rated 100 W, it has reached its voltage limit for the low frequencies but not the high frequencies, which keep rising until they too reach that limit.
6) So the user continues to increase the volume until he can no longer stand the audible distortion (say, from the clipped high frequencies), but before that point is reached, the amp's output voltage has put too much energy into the tweeter and caused it to fail; that is, the tweeter is overdriven by an unclipped or mostly unclipped signal.

In other words, he does believe it is overpowering that kills tweeters, not underpowering, and he recognizes why people were led to believe it was the other way around. He also believes more powerful amps tend not to damage tweeters as often, because a more powerful amp has less chance of causing compression in the woofers, so users get the loudness they want without having to crank the volume up under the misconception that the SPL will increase proportionally, when in fact only the high frequencies increase in SPL while the low-frequency SPL remains flat, hence compressing.
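The sequence can be put into a toy model (the voltage figures are borrowed loosely from the Rane example and are illustrative, not measurements): the LF component hits the 40 V rail and flattens, while the HF component keeps rising with the volume knob.

```python
RAIL = 40.0        # amp's maximum output, V peak
LF_DEMAND = 40.0   # bass component at reference volume, V peak
HF_DEMAND = 10.0   # treble component at reference volume, V peak

def outputs(vol):
    """Per-band output voltage at a given volume multiplier."""
    lf = min(vol * LF_DEMAND, RAIL)   # clips, then stays flat: compression
    hf = min(vol * HF_DEMAND, RAIL)   # still below the rail: keeps rising
    return lf, hf

for vol in (1.0, 2.0, 3.0, 4.0):
    lf, hf = outputs(vol)
    print(f"volume x{vol}: LF {lf:.0f} V, HF {hf:.0f} V "
          f"-> tweeter power x{(hf / HF_DEMAND) ** 2:.0f}")
```

By the time the user has turned it up 4x, the tweeter is seeing 16 times its reference power while the bass has not gotten any louder. (This simple model ignores the clipping harmonics that also land in the tweeter's band, which only make things worse.)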
 
RichB

Audioholic Field Marshall
^^^
Peng, you're the man. :)
Great summary.

If this is correct, there are likely some folks driving their AVRs into compression who may benefit from a more powerful amp, even though it sounds clean and clear.

This is another interesting article that was discussed in another thread about phase angle.
I do not really grasp it all, but some EEs here get it.

Phase Angle Vs. Transistor Dissipation

It is not uncommon for speakers to have significant phase angle shifts across the frequency range.
At a 45-degree phase angle, if I read this correctly, the transistors must dissipate roughly twice the power, much of it into the heat sinks.
Still the power supply must produce the power.
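A back-of-envelope class B model shows where that power goes (the supply, output, and load numbers here are hypothetical, not from the article): the supply delivers Vcc times the average rectified current regardless of phase, but the load only absorbs the real power Vp·Ip·cos(φ)/2; the difference heats the output transistors.

```python
import math

def stage_power(phi_deg, vcc=50.0, vp=40.0, z=8.0):
    """Average load power and transistor dissipation for an idealized
    class B output stage driving |Z| ohms at phase angle phi (sketch)."""
    ip = vp / z                                    # load current, A peak
    p_supply = vcc * 2 * ip / math.pi              # power drawn from the rails
    p_load = vp * ip / 2 * math.cos(math.radians(phi_deg))
    return p_load, p_supply - p_load               # (into load, into heat sinks)

for phi in (0, 30, 45, 60):
    p_load, p_diss = stage_power(phi)
    print(f"{phi:>2} deg: load {p_load:5.1f} W, transistors {p_diss:5.1f} W")
```

With these example numbers, transistor dissipation rises from roughly 59 W at 0 degrees to roughly 88 W at 45 degrees for the same output voltage, which is why heat sinks and supplies have to be sized for the load's phase angle, not just its impedance magnitude.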

At the end of the article he says this:

The thing that saves some amplifiers is the power supply impedance, and careful design (hint - the cheapest alternative) ensures that there is enough power available for transients, but it will collapse sufficiently to allow for worst case conditions. This is not a good method to rely on.
Some commercial amplifiers use a tapped power transformer, and have settings for 8 and 4 ohms. The voltage is reduced for 4 ohm operation to make sure that the transistor SOA is not exceeded. Others take a more simplistic approach (many subwoofer amps fall into this category), where the transformer is simply too small for the job. If loaded heavily and driven hard, the supply voltage will collapse because of the under-rated transformer, and the amp will survive. Fortunately, music is dynamic, so the transformer will not have to suffer a sustained overload, and will usually live a long and happy life.
If an amplifier's transformer voltage collapses due to a tough phase angle, won't that also result in compressing the signal?

- Rich
 
Swerd

Audioholic Warlord
Peng – thanks for the lengthy explanation.

The short version… drunk turns up volume too high… blown tweeter

Compression has too many definitions in audio:

Speaker compression – when a speaker is driven beyond its linear response range
Digital compression – as in various lossy digital audio formats
Dynamic Range Compression – when vinyl records were mastered with dynamic range limited to ~70 dB to prevent the needle from jumping the groove.
Others?
 
PENG

Audioholic Slumlord
^^^
Peng, you're the man. :)
Great summary.

If this is correct, there are likely some folks driving their AVRs into compression who may benefit from a more powerful amp, even though it sounds clean and clear.

This is another interesting article that was discussed in another thread about phase angle.
I do not really grasp it all, but some EEs here get it.

Phase Angle Vs. Transistor Dissipation

It is not uncommon for speakers to have significant phase angle shifts across the frequency range.
At a 45-degree phase angle, if I read this correctly, the transistors must dissipate roughly twice the power, much of it into the heat sinks.
Still the power supply must produce the power.

At the end of the article he says this:



If an amplifier's transformer voltage collapses due to a tough phase angle, won't that also result in compressing the signal?

- Rich

That's one of the best write-ups that tries to explain the effect of phase angles. It was also the one I linked for Steve81 to finally convince him that phase angle did matter and needed to be considered when sizing amps. Yes, the power supply must produce the power regardless of whether it gets dissipated in the power transistors or the loudspeakers. The concern is not so much the power supply as the power transistor circuit, which has to deal with potentially much higher than normal power dissipation when the phase angle is bad.
 
Swerd

Audioholic Warlord
That's one of the best write-ups that tries to explain the effect of phase angles…
Agreed. Phase angle vs. frequency should always be considered by a speaker designer to avoid generating those large impedance phase angle swings.

Transmission line enclosures and ribbon tweeters contribute to well-behaved impedance (red) and impedance phase (green) curves. For example, the Philharmonic 3:

 
KEW

Audioholic Overlord
In other words, he does believe it is overpowering that kills tweeters, not underpowering, and he recognizes why people were led to believe it was the other way around. He also believes more powerful amps tend not to damage tweeters as often, because a more powerful amp has less chance of causing compression in the woofers, so users get the loudness they want without having to crank the volume up under the misconception that the SPL will increase proportionally, when in fact only the high frequencies increase in SPL while the low-frequency SPL remains flat, hence compressing.
I hope this is not viewed as a hijack of the thread, but I wanted to comment on the well established alternative (on AH forum, at least) of adding a powered sub to reduce the load on the AVR. For someone without preamp outputs this may be a more prudent option than having to replace the AVR and add an amp.
 
PENG

Audioholic Slumlord
I hope this is not viewed as a hijack of the thread, but I wanted to comment on the well established alternative (on AH forum, at least) of adding a powered sub to reduce the load on the AVR. For someone without preamp outputs this may be a more prudent option than having to replace the AVR and add an amp.
Agree, and ADTG has made that same point multiple times in various threads.
 
TLS Guy

Audioholic Jedi
Agree, and ADTG has made that same point multiple times in various threads.
The problem is, that is not true. I have the ability to look carefully at the energy spectrum of music. The energy rapidly falls off below 100 Hz. However, most subs are inefficient, sealed ones extremely so. If you have a speaker system like I do, you can have bass aplenty with very moderate power. The real power range is 100 to 1500 Hz, especially 200 to 1000 Hz. Quite a lot of sources have significant power out to 2.5 kHz.

As far as amps are concerned, clipping is clipping, and it will produce distortion products. Class A amps produce largely even harmonics; class AB amps largely odd-order harmonics, which are more unpleasant and damaging. A class AB amp will be pure class B at clipping.
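That harmonic structure is easy to verify with an FFT (a sketch with arbitrary clip levels): a symmetric clip, as in a push-pull stage, yields only odd harmonics, while an asymmetric transfer adds even ones.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)           # 1 kHz tone

sym = np.clip(1.5 * x, -1.0, 1.0)          # symmetric hard clip (push-pull style)
asym = np.clip(1.5 * x, -1.4, 1.0)         # one side clips harder (asymmetric)

def h(sig, k):
    """Amplitude of the k-th harmonic of 1 kHz (1 Hz FFT bins)."""
    return np.abs(np.fft.rfft(sig))[1000 * k] * 2 / len(sig)

print("symmetric :", [round(h(sym, k), 3) for k in (2, 3, 4, 5)])
print("asymmetric:", [round(h(asym, k), 3) for k in (2, 3, 4, 5)])
```

The symmetric case shows energy only at 3 kHz and 5 kHz; the asymmetric case adds a 2 kHz component (and other even harmonics).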

There is only one waveform, so it does not matter which part of the frequency spectrum causes the clipping: everything clips. Clipping occurs when there is insufficient voltage to produce the required current into the load.

Now quite a few amps these days have current limiting circuits, to prevent amp destruction. If you limit current, then voltage falls and so you have clipping. If that clipping occurs sooner than it otherwise would without the current limiter, and that is the usual situation, then the speakers are at greater risk.

We seem to have a trend to more tweeter failures of late, but I think that is due to speaker design issues.

Wide-bandwidth drivers are rare and expensive. In general, the cheaper the driver, the earlier you start getting break-up peaks and beaming. So in this cost-sensitive age it is tempting to lower the crossover point. In fact many advocate this. It is a bad idea. There is a lot of power out to 2.5 kHz, and lower crossover points on the tweeter put more power into the tweeter than is advisable. I never cross lower than 2.5 kHz and try to get it to 2.8 or 3 kHz.

This is one of the reasons why effortless reproduction of the high peaks requires expensive speakers and powerful amplification. I don't see that changing any time soon.

The reproduction flattening can also be caused by driver thermal compression in addition to the amp starting to run out of gas.
 
PENG

Audioholic Slumlord
Well, with all due respect, you need to have a good understanding of Fourier theory to understand that Rane article. I can tell you the author knows what he was talking about.
 
TLS Guy

Audioholic Jedi
Well, with all due respect, you need to have a good understanding of Fourier theory to understand that Rane article. I can tell you the author knows what he was talking about.
I did not say he didn't. As I said, a crude limiter circuit will make matters worse, and he showed that. In his example the amp must have had one of these so-called soft-clip circuits, which I think is a misnomer. The amp designer basically wants your speakers to blow rather than have the amp returned under warranty.

Obviously things have changed. We have always had an incidence of tweeter failures, but now we get posts showing passive high pass tweeter circuits totally blown out. That has to take enormous power in the HF band where it does not belong.

So if limiters are to be used, it must not be done crudely, but with sensible attack and release. That will really drive up amp costs. Even then, I think it would only be bulletproof if the speakers and amps are designed as a unit. This should be the direction anyway, as I have stated for years: amps really belong in the speakers, not in a receiver box.
 
PENG

Audioholic Slumlord
The problem is, that is not true. I have the ability to look carefully at the energy spectrum of music. The energy rapidly falls off below 100 Hz. However, most subs are inefficient, sealed ones extremely so. If you have a speaker system like I do, you can have bass aplenty with very moderate power. The real power range is 100 to 1500 Hz, especially 200 to 1000 Hz. Quite a lot of sources have significant power out to 2.5 kHz.
We are not saying that it is a total solution, but it would certainly help to a point. Obviously how much subs help depends on the spectrum of the music/movie content, and that of course is a variable. That's why KEW referred to scenarios where the AVRs have no pre-outs.


Class A amps produce largely even harmonics; class AB amps largely odd-order harmonics, which are more unpleasant and damaging. A class AB amp will be pure class B at clipping.
According to Nelson Pass: "Anecdotally, it appears that preferences break out roughly into a third of customers liking 2nd harmonic types, a third liking 3rd harmonic and the remainder liking neither or both. Customers have also been known to change their mind over a period of time."


There is only one waveform, so it does not matter which part of the frequency spectrum causes the clipping: everything clips. Clipping occurs when there is insufficient voltage to produce the required current into the load.
No, when that waveform clips it does not mean "everything clips"; it clips certain frequencies, not necessarily all of them, though of course it may. Before Fourier, people might have been led to believe what you are saying now, but Fourier analysis allows us to analyze a waveform by representing it with a series (possibly infinite) of harmonics. As the Rane article cited in its example, when the LF causes an amp to output 40 V pk, the HF components of that 40 V pk waveform may be at only 9 to 13 V pk, so no, in that case the amp would not have clipped the HF. The waveform you see at the output of the amp needs to be looked at with a spectrum analyzer, or just trust Fourier transforms; the telecommunications industry lives by them. And of course you know that, and we are probably just miscommunicating with each other. :D
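The 40 V example is easy to reproduce numerically (the component frequencies and amplitudes below are hypothetical, chosen only to mimic the article's numbers): a waveform whose total swing reaches the 40 V rail can carry an HF component of only about 11 V peak, which an FFT separates even though the scope trace cannot.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
bass = 30.0 * np.sin(2 * np.pi * 50 * t)       # V peak
treble = 11.0 * np.sin(2 * np.pi * 4000 * t)   # V peak
total = bass + treble

spec = np.abs(np.fft.rfft(total)) * 2 / len(total)
print(f"waveform peak  : {np.max(np.abs(total)):.1f} V")   # reaches about 41 V
print(f"50 Hz component: {spec[50]:.1f} V peak")
print(f"4 kHz component: {spec[4000]:.1f} V peak")
```

So an amp with 40 V rails would already be clipping this waveform, yet the high-frequency content is still far below the rail: exactly the situation where the LF compresses while the HF keeps climbing.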
 
