How does a lower impedance load increase the damping factor? Inquiring minds want to know.
http://www.crownaudio.com/media/pdf/amps/damping_factor.pdf
Damping Factor is the total impedance of the load (loudspeaker, crossover elements, and, usually, both legs of the speaker cable) divided by the output impedance of the amplifier (typically very low in Solid State amplifiers; low, but much higher than SS, in Vacuum State amplifiers).
It is a purely resistive calculation: although the impedances involved vary with frequency, the calculation is unaffected by Phase Angle.
For example, the Audio Research Reference 75 (Stereophile Review; Measured Values):
" ...
The output impedance was relatively low for an amplifier using a single pair of output tubes, at 0.74 ohm at 20Hz and 1kHz, and 0.95 ohm at 20kHz from the 4 ohm transformer tap.
From the 8 ohm tap these impedances were 1.25 and 1.56 ohms, respectively. ..."
From the above you can see that the Damping Factor is also frequency dependent, so a single published figure doesn't tell the whole story; the same is true for Solid State amplifiers. According to theory, the DF is actually needed at low frequencies (woofer control) and plays no, or almost no, role at high frequencies.
A typical output impedance for a Solid State amplifier is perhaps 0.10 ohm or less, often much less.
So, @ 20 Hz the AR R75's Damping Factor with a 4 ohm load connected to the 4 ohm transformer tap is 5.4; a Solid State amplifier (using the typical value I presented above) with the same load connected would have a DF of 40.
The formula to calculate is:
DF = Zload / Zsource
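Just to make the arithmetic explicit, here's a minimal Python sketch of that formula, run against the Ref 75 output impedances quoted above and the typical Solid State figure I mentioned (the speaker is idealized as a flat 4 ohm resistive load, which real speakers are not):

```python
def damping_factor(z_load_ohms: float, z_source_ohms: float) -> float:
    """DF = Zload / Zsource."""
    return z_load_ohms / z_source_ohms

# Audio Research Reference 75, 4 ohm tap (Stereophile figures quoted above)
ref75_zout = {"20 Hz": 0.74, "1 kHz": 0.74, "20 kHz": 0.95}

ss_zout = 0.10   # the typical Solid State output impedance mentioned above
z_load = 4.0     # idealized flat 4 ohm load

for freq, zout in ref75_zout.items():
    print(f"Ref 75 @ {freq:>6}: DF = {damping_factor(z_load, zout):.1f}")
print(f"Typical SS amp : DF = {damping_factor(z_load, ss_zout):.0f}")
# Ref 75 works out to 5.4 at 20 Hz, 4.2 at 20 kHz; the SS amp to 40.
```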
Manufacturers sometimes specify the DF, but usually only testing reveals the values at various frequencies; the published figure is typically the 1 kHz value (where the DF has no genuine useful function). So you might see a DF specified at perhaps 200 or 500, but this is the 1 kHz value, not the value available below 100 Hz, where the benefit is supposed to be evident.
In theory, the Damping Factor helps the amplifier resist Back-Electromotive Force (Back-EMF), which, again in theory, should result in tighter control of the woofer by the amplifier. There are those who advocate high DFs as a benefit (particularly, these days, in Car Audio applications).
Since the SS amplifier has an inherent advantage in the DF specification, during the 1960's, when SS amps were battling for market share against Vacuum State amplifiers, the SS manufacturers claimed that the high DF of their amplifiers was an inherent advantage, resulting in "tighter" bass.
In practice this rarely was evident, and the SS amps of the 1960's are not known for particularly good sonic qualities in the first place ... Solid State didn't really come into its own until the 1970's. That is why tube amps from the 60's are sought-after items on the used market, while you can only sell SS amps of that vintage to the naive buyer of "Vintage" gear.
High Damping Factors in Solid State amplifiers can be achieved by the Circuit Designer by increasing the amount of Global Negative Feedback (GNF), so in reality you can "Dial In" almost any DF you want in an SS amp (and high GNF improves all the distortion specifications as well).
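To see why the designer can "Dial In" the DF, here's a hedged sketch using the textbook closed-loop approximation Zout(closed) ≈ Zout(open) / (1 + loop gain); the open-loop output impedance and loop-gain values below are made-up illustrative numbers, not measurements of any real amplifier:

```python
def closed_loop_zout(z_out_open_loop: float, loop_gain: float) -> float:
    """Textbook feedback approximation: Zout_cl ≈ Zout_ol / (1 + loop gain)."""
    return z_out_open_loop / (1.0 + loop_gain)

z_out_open_loop = 2.0   # hypothetical open-loop output impedance, ohms
z_load = 8.0            # nominal load used for the DF figure

for loop_gain in (10, 100, 1000):
    z_cl = closed_loop_zout(z_out_open_loop, loop_gain)
    print(f"loop gain {loop_gain:>4}: Zout ≈ {z_cl:.3f} ohm, DF ≈ {z_load / z_cl:.0f}")
# More Global Negative Feedback -> lower output impedance -> an almost arbitrarily
# high DF number, which is why the spec by itself says little about sound quality.
```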
Since high GNF is associated with poor sonic qualities, you can use the DF specification as a kind of warning that the amplifier you are considering may not be the bargain it claims to be, despite the fact that its specifications "blow the competition out of the water".