vgmbpty

Enthusiast
Was wondering if anybody has had the chance to compare AVRs with these two specs (Burr-Brown DACs) under similar conditions, and whether there was a real difference that could be heard?

As usual, specs are being used to sell more gear, but I wonder if they are already past what a normal human can actually hear.

It reminds me of the megapixel race in digital cameras. Unless you are making prints the size of a door, 8 MP is more than enough for most people, and even less is fine in many cases.
 
vgmbpty

Enthusiast
OK, over 80 views and no answers makes me think this difference in bits may not be very relevant to the sound.
 
MDS

Audioholic Spartan
I don't think there are any sources that are 32/192, and high-resolution audio like that is really only found on SACD and DVD-A discs, which are pretty much obsolete now. Without looking it up, I don't know the bit depth and sample rate of the newer lossless audio formats such as Dolby TrueHD or DTS-HD MA.

Higher bit depths and sampling rates are useful for capturing a more accurate representation of the analog waveform when converting from analog to digital, but the difference can be subtle to nonexistent to many ears. Higher sampling rates spread the quantization noise (the error from rounding each sample to a finite number of bits) over a wider band, pushing much of it into frequencies well beyond the range of human hearing, and that can make the high end slightly smoother and less gritty (and of course golden ears can always hear that as an improvement).
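
To put rough numbers on that, here's a quick back-of-envelope sketch (mine, not from any spec sheet; the 48 kHz rate and 997 Hz tone are arbitrary choices) that quantizes a full-scale sine at different bit depths and measures how far the quantization noise sits below the signal:

```python
import numpy as np

# Quantize a full-scale sine at several bit depths and measure how far
# the quantization noise sits below the signal.
fs = 48_000                        # sample rate (arbitrary for this demo)
t = np.arange(fs) / fs             # one second of samples
signal = np.sin(2 * np.pi * 997 * t)   # 997 Hz so samples don't repeat a short pattern

for bits in (16, 24, 32):
    step = 2.0 / (2 ** bits)                       # quantization step for a +/-1.0 signal
    noise = np.round(signal / step) * step - signal
    snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
    print(f"{bits}-bit: signal-to-quantization-noise ~ {snr_db:.0f} dB")

# Prints roughly 98 dB, 146 dB and 194 dB -- the textbook 6.02*N + 1.76 dB
# figure. Anything much past ~120 dB is already below the noise floor of any
# real-world playback chain, never mind human hearing.
```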

Nearly all mid to high end receivers process the audio internally at either 24 or 32 bits. The reason is that a lot of processing involves mathematical operations, and doing the math at a lower bit depth can lead to overflow or rounding errors, which change the sample values and therefore the sound. For example, if you multiply two 16 bit numbers, the result may not fit in 16 bits. (Aliasing is a separate issue that arises in analog to digital conversion when the sample rate is too low; worth looking up if you want a better understanding of sampling artifacts.)
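
A tiny sketch of that overflow point, with made-up numbers just for illustration:

```python
import numpy as np

# The product of two 16-bit values can need up to 32 bits to hold.
a = np.array([300], dtype=np.int16)
b = np.array([200], dtype=np.int16)

wrapped = a * b                                    # math kept in 16 bits: overflows
widened = a.astype(np.int32) * b.astype(np.int32)  # math done in 32 bits: exact

print("16-bit result:", wrapped[0])   # -5536, the true product wrapped around
print("32-bit result:", widened[0])   # 60000, the mathematically correct value
```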

In short, higher resolution audio has the potential to sound better but is not always obviously better. I wouldn't worry about whether a receiver has 24/192 or 32/192 DACs, because either is more than necessary for most sources and you'd be hard pressed to find any receiver with less than 24/96 DACs anyway.
 
s162216

Full Audioholic
To my knowledge the biggest bit depth that any current audio codec uses is 24 bits. Past that, all you're going to do is get a massively bigger file for little to no perceivable increase in sound quality.
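
Rough numbers to back that up (a back-of-envelope sketch for uncompressed stereo PCM, ignoring container overhead and any lossless compression):

```python
# Back-of-envelope size of one minute of uncompressed stereo PCM at 192 kHz.
sample_rate = 192_000   # samples per second
channels = 2
seconds = 60

for bits in (16, 24, 32):
    size_mb = sample_rate * channels * (bits // 8) * seconds / 1e6
    print(f"{bits}-bit / 192 kHz stereo: ~{size_mb:.0f} MB per minute")

# Roughly 46, 69 and 92 MB per minute: going from 24 to 32 bits adds about
# a third more data for detail that sits below the playback noise floor.
```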

It is like the so-called 'megapixel race': camera resolutions have been increasing every year even though the cameras mostly use the same sized sensors, so noise and processing time increase pointlessly. Most high end compact cameras, like the Canon PowerShot series, have actually been decreasing the number of megapixels to get better pictures with less noise and faster processing, while cheap ones keep forging ahead with ever more megapixels.
The PowerShot G10 was 14 MP, for instance, and the G11 and G12 dropped that to 10 MP, and in tests they produce better images with much less noise and comparable detail. All you actually need for a 10"x8" print is about 8 MP anyway.
 
