As I understand it, film soundtracks have about 105dB of dynamic range to work with. Theoretically, when the wind is blowing in the right direction and the stars are properly aligned, 16-bit offers 96dB of dynamic range, while 24-bit offers about 144dB. Obviously a full 105dB cannot be represented at 16-bit without quantization noise, but I've yet to see a film track actually use the entire range. The lowest levels I've measured in my DAW are around -70dBFS, which translates to about 35dB SPL on a properly calibrated system.

The noise floor in studios and the best cinemas is around 30dB SPL, while most home theaters average around 40dB SPL. So in practice we only need about 75dB of dynamic range for film, which 16 bits covers easily, so why waste space using 24-bit audio? I understand 48kHz is necessary because 44.1kHz causes synchronization problems with 24p video, but I don't get why 24-bit was chosen. Even granting that engineers have 105dB to work with, it's no different from something like a classical music recording, which also has a dynamic range of about 105dB. No engineer is going to mix a sound below -70dBFS, because no playback environment, including an acoustically treated studio, has a noise floor low enough for sounds quieter than that to be heard.
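To sanity-check those numbers, here's the arithmetic as a quick Python sketch. The 105dB SPL reference assumes the standard cinema calibration, where -20dBFS pink noise plays back at 85dB SPL per screen channel:

[code]
# Rough dynamic range per bit depth for linear PCM: ~6.02 dB per bit.
DB_PER_BIT = 6.02  # 20*log10(2)

print(f"16-bit: {16 * DB_PER_BIT:.1f} dB")  # ~96.3 dB
print(f"24-bit: {24 * DB_PER_BIT:.1f} dB")  # ~144.5 dB

# Calibrated playback: 0 dBFS = 105 dB SPL (from -20 dBFS = 85 dB SPL).
reference_spl = 105.0
quietest_mixed_dbfs = -70.0  # lowest level measured in the DAW
print(f"-70 dBFS plays at {reference_spl + quietest_mixed_dbfs:.0f} dB SPL")

# Usable range above a best-case 30 dB SPL room noise floor.
print(f"Usable range: {reference_spl - 30.0:.0f} dB")  # 75 dB
[/code]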
Mixing in 24-bit or even 32-bit floating point makes sense, since any processing of the sound could inadvertently introduce quantization errors, but why render the final master to disc at such a high bit depth? That's an additional 8 bits per sample, adding roughly 2.3Mbps to a 5.1 track; that extra space could instead be used to provide better video compression.
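For reference, the 2.3Mbps figure checks out, assuming uncompressed LPCM on the disc:

[code]
# Extra bitrate from widening 5.1 PCM at 48 kHz from 16-bit to 24-bit.
extra_bits_per_sample = 24 - 16
sample_rate_hz = 48_000
channels = 6  # 5.1

extra_bps = extra_bits_per_sample * sample_rate_hz * channels
print(f"Extra bitrate: {extra_bps / 1e6:.3f} Mbps")  # 2.304 Mbps
[/code]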