320 bitrate vs FLAC (distinguishable differences)?

PENG

Audioholic Slumlord
Irv is wrong about DBT testing if people are doing ABX testing, such as with Foobar's ABX Comparator module that is commonly used for testing lossy vs. lossless music. ABX is a specific type of DBT. It is not designed to test preference, but whether or not you can reliably distinguish between two audio signals. It is not an evaluative test. It never asks you which is better. It plays A and B for you, then X, and asks you to identify whether X is A or B. If you want to identify preference, then you would do some additional or other kind of DBT A/B comparison.
I know what ABX and DBT are supposed to do. I said I was confused by you being confused (sort of jokingly), and I explained my reasoning in my post. My point was more about the logic, not the exact wording Irv used in his post. One of the reasons I always respect Irv is that he impresses me as being logical, open-minded, and willing to admit it if he makes an error, in this case just the wording. Actually, if he had skipped the word "only", there would have been no error to admit. :D
 
gzubeck

Audioholic
YouTube is the bottleneck for sound quality, as is the necessity of digitizing everything in order to upload it. I've seen the TT in the video, but what can I expect to hear?
Not much... but I did notice the LPs sounded smoother and more lifelike. It's subtle but noticeable.
 
killdozzer

Audioholic Samurai
Not much... but I did notice the LPs sounded smoother and more lifelike. It's subtle but noticeable.
Look, I don't want you to think I'm playing dumb; I did hear a slight difference. I'm just saying I wouldn't draw any conclusions from it. It was recorded with a small home camera in order to be uploaded to the Internet, so what we hear is simply dubious. The display in that video (showing dBs) is probably more indicative than the sound itself.
 
Dan Madden

Audioholic
Like many have said above, the recording, mastering, and care that go into a recording will have a vastly bigger impact on how the recording sounds than the format it is played in.

However, on the flip side of this statement, I have the standard CD of Diana Krall's "Love Scenes", which for jazz lovers is a fabulous recording. I also have the DVD-Audio version of the same recording in 24-bit/96 kHz, and I can honestly say that the DVD-Audio version takes what was already a fabulous recording into another dimension.

I don't otherwise have a lot of examples of A/B comparisons of standard vs. hi-def recordings, but this is one of them. In most cases, I can hear a clear improvement when I'm listening to recordings at a higher bit depth and sampling rate.

However, a good recording is a good recording and I happily listen to them regardless of their resolution.
 
lovinthehd

Audioholic Jedi
So this Krall CD has the same mastering between the 2.0 CD version and the 5.1 DTS DVD-Audio version? I don't think so... the improved audio is from the remastering, not the higher sample rate/bit depth.

Like many have said above, the recording, mastering, and care that go into a recording will have a vastly bigger impact on how the recording sounds than the format it is played in.

However, on the flip side of this statement, I have the standard CD of Diana Krall's "Love Scenes", which for jazz lovers is a fabulous recording. I also have the DVD-Audio version of the same recording in 24-bit/96 kHz, and I can honestly say that the DVD-Audio version takes what was already a fabulous recording into another dimension.

I don't otherwise have a lot of examples of A/B comparisons of standard vs. hi-def recordings, but this is one of them. In most cases, I can hear a clear improvement when I'm listening to recordings at a higher bit depth and sampling rate.

However, a good recording is a good recording and I happily listen to them regardless of their resolution.
 
yepimonfire

Audioholic Samurai
So this Krall CD has the same mastering between the 2.0 CD version and the 5.1 DTS DVD-Audio version? I don't think so... the improved audio is from the remastering, not the higher sample rate/bit depth.
And ironically, DTS is lossy; at 24/96, 1.5 Mbps is some pretty heavy compression considering an uncompressed 5-channel PCM stream would be about 11.5 Mbps. That's nearly an 8:1 compression ratio, yet here he is praising its better sound over the CD, further proving the point.
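If you want to sanity-check that math, here's a minimal sketch (the 1.536 Mbps figure is my assumption for the full-rate DTS core; the exact number varies by disc):

```python
# Uncompressed PCM bitrate = bit depth x sample rate x channel count
bit_depth = 24
sample_rate = 96_000     # Hz
channels = 5

pcm_bps = bit_depth * sample_rate * channels   # 11,520,000 b/s
dts_bps = 1_536_000                            # assumed full-rate DTS core

print(f"PCM: {pcm_bps / 1e6:.2f} Mbps")                  # 11.52 Mbps
print(f"Compression ratio: {pcm_bps / dts_bps:.1f}:1")   # 7.5:1
```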

Sent from my SM-G360T1 using Tapatalk
 
Irvrobinson

Audioholic Spartan
And ironically, DTS is lossy; at 24/96, 1.5 Mbps is some pretty heavy compression considering an uncompressed 5-channel PCM stream would be about 11.5 Mbps. That's nearly an 8:1 compression ratio, yet here he is praising its better sound over the CD, further proving the point.
Dave's post doesn't prove anything, but your observation is pretty funny.
 
yepimonfire

Audioholic Samurai
Like many have said above, the recording, mastering, and care that go into a recording will have a vastly bigger impact on how the recording sounds than the format it is played in.

However, on the flip side of this statement, I have the standard CD of Diana Krall's "Love Scenes", which for jazz lovers is a fabulous recording. I also have the DVD-Audio version of the same recording in 24-bit/96 kHz, and I can honestly say that the DVD-Audio version takes what was already a fabulous recording into another dimension.

I don't otherwise have a lot of examples of A/B comparisons of standard vs. hi-def recordings, but this is one of them. In most cases, I can hear a clear improvement when I'm listening to recordings at a higher bit depth and sampling rate.

However, a good recording is a good recording and I happily listen to them regardless of their resolution.
24/96 is the standard for mixing and mastering for a reason, but it's not because you need a higher bit depth and sample rate to reproduce the audible frequency range with perfect precision. Bit depth is really a matter of dynamic range. Almost all music has some form of dynamic range compression, except maybe classical or acoustic. Even a voice spoken directly into a mic can have a large dynamic range, and many recordings may have tens of layers all being fed into the master input at once.

A higher sampling rate ensures that as few digital artifacts as possible show up in the recording. With the digital instruments frequently used in modern recordings, frequencies generated far above the range of human hearing are a concern; at 44.1 kHz this can introduce foldover (aliasing) distortion. Normally, microphones fail to respond past a certain point, and an A/D converter's anti-aliasing filter would cut those frequencies off when sampling at 44.1 kHz, so it's not a problem, but inside the computer there is no such limit.

Secondly, a mix can have 20+ different plug-ins processing the signal, each one potentially stacking one error on top of the next. This potential is significantly reduced by using a much higher sample rate, and it's the same reason your A/V receiver oversamples the original signal before applying things like bass management, room correction, and matrix processing. So long as a recording has been properly mastered, the final 16/44.1 track should sound identical to the 24/96 master.
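To make the foldover point concrete, here's a minimal sketch (the 30 kHz tone is a hypothetical synth partial; any frequency above the Nyquist limit of half the sample rate reflects back into the audible band):

```python
import numpy as np

fs = 44_100        # CD sample rate; Nyquist limit = 22,050 Hz
f_tone = 30_000    # hypothetical ultrasonic partial from a digital synth
t = np.arange(fs) / fs                 # one second of sample times
x = np.sin(2 * np.pi * f_tone * t)     # "capture" the tone at 44.1 kHz

# See where the energy actually lands in the sampled signal
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(f"{f_tone} Hz folds over to {freqs[np.argmax(spectrum)]:.0f} Hz")
# -> 30000 Hz folds over to 14100 Hz (44100 - 30000), squarely in-band
```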

There is an argument that higher sampling rates better capture the separation and spatial depth of instruments and the soundstage, but for this to work there has to be compensation for the ADC timing in the mastering studio, and preferably on the playback DAC's end as well, a problem MQA claims to have solved.
Sent from my SM-G360T1 using Tapatalk
 
Bucknekked

Audioholic Samurai
Recording/mastering is key to the eventual playback quality, not so much the format if our lowest reference point is 320 kbps.
I don't remember if I have given this a two-thumbs-up before or not. IMHO, the single most important thing in how a recording will sound when reproduced on home audio equipment is the quality of the original recording in the studio. For a recording that was poorly engineered in the studio, it won't matter what word length and sampling rate were used. It's a fatal flaw. Your point, PENG, about seeking out recordings that were well done to begin with is often lost in the bits-n-bytes discussions.

I am by no means an expert here. Many of the discussions in this area of reproduction are distinctions without differences. There are things that make a difference, and the quality of the original material is one I can hang my hat on.
 
PENG

Audioholic Slumlord
I don't remember if I have given this a two-thumbs-up before or not. IMHO, the single most important thing in how a recording will sound when reproduced on home audio equipment is the quality of the original recording in the studio. For a recording that was poorly engineered in the studio, it won't matter what word length and sampling rate were used. It's a fatal flaw. Your point, PENG, about seeking out recordings that were well done to begin with is often lost in the bits-n-bytes discussions.

I am by no means an expert here. Many of the discussions in this area of reproduction are distinctions without differences. There are things that make a difference, and the quality of the original material is one I can hang my hat on.
I would love to hear from some recording/mastering experts who also are certified audioholics.
 
Bucknekked

Audioholic Samurai
I would love to hear from some recording/mastering experts who also are certified audioholics.
There may be some. I usually steer clear of these kinds of discussions. They tend to turn into "my way puts more angels on the head of a pin than your way", or "your way of putting angels on the head of a pin doesn't matter because the angels sound the same either way". Or something like that. :p
 
yepimonfire

Audioholic Samurai
I'm not an expert but I've done a lot of recording, mixing, and mastering.

I can tell you that in the majority of circumstances 24/96 masters sound no different from properly downsampled 16/44.1 masters. Recording and mixing are entirely different. A track that is recorded, mixed, and mastered at 16/44.1 or 24/44.1 will sound like crap when it's finished compared to a recording done at 24/96 and then mastered down to 16/44.1. I can also tell you that 24/192 is worse than 24/96. Analog circuits start behaving erratically at ultrasonic rates, which can introduce intermodulation distortion that will end up in the final mix. Your mic may not pick up a 70 kHz tone, but that doesn't mean something in the input chain isn't introducing it into the mix. There are also clock issues at sampling rates this high in even the best A/D converters.

Most of the issues with producing at 44.1 can be solved by oversampling the entire project and the plug-ins, so it's not the act of capturing at 96 kHz that improves the sound. Resampling multiple times can introduce its own problems, though, so it's better to just start at 96 kHz, but if someone sends me something that's 44.1 you'd best believe I'm resampling it before I start mucking with it.
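As a rough illustration of that one-shot conversion, assuming SciPy is available (resample_poly applies its own anti-aliasing filter, and 96000/44100 reduces exactly to 320/147):

```python
import numpy as np
from scipy.signal import resample_poly

def to_96k(x: np.ndarray) -> np.ndarray:
    """Upsample a 44.1 kHz track to 96 kHz in one polyphase step."""
    # 96000 / 44100 reduces to 320 / 147, so the conversion is exact
    return resample_poly(x, up=320, down=147)

# Example: one second of a 1 kHz test tone at 44.1 kHz
fs = 44_100
tone = np.sin(2 * np.pi * 1_000 * np.arange(fs) / fs)
print(len(tone), "->", len(to_96k(tone)))   # 44100 -> 96000
```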

This is exactly why I related lossless/lossy compression to 24/96. You don't need all that extra discarded information for faithful reproduction of sound. 24/96 captures a much more accurate waveform even in the audible range, but your DAC doesn't need all of that info to reproduce that waveform.
 
PENG

Audioholic Slumlord
Even a voice spoken directly into a mic can have a large dynamic range, and many recordings may have tens of layers all being fed into the master input at once.
I assume you are one of those recording/mastering engineers I have been waiting to hear from. Funny you mentioned the dynamic range of the voice. I have posted about a consistent type of distortion that I could hear in Adele's voice. It doesn't matter if it is on vinyl or CD; whenever she ramps up, I can always hear distortion at the end of a line of the lyrics. I wonder if she could benefit from being recorded in 32-bit, or whether someone should tell her to stay further away from the mic. On the other hand, I seem to be the only one hearing her distortion. :D I guess the only way I can find out is to go to her live concert, but then the mic will still be in the loop; only the recording process will be bypassed.
 
Irvrobinson

Audioholic Spartan
I'm not an expert but I've done a lot of recording, mixing, and mastering.

I can tell you that in the majority of circumstances 24/96 masters sound no different from properly downsampled 16/44.1 masters. Recording and mixing are entirely different. A track that is recorded, mixed, and mastered at 16/44.1 or 24/44.1 will sound like crap when it's finished compared to a recording done at 24/96 and then mastered down to 16/44.1. I can also tell you that 24/192 is worse than 24/96. Analog circuits start behaving erratically at ultrasonic rates, which can introduce intermodulation distortion that will end up in the final mix. Your mic may not pick up a 70 kHz tone, but that doesn't mean something in the input chain isn't introducing it into the mix. There are also clock issues at sampling rates this high in even the best A/D converters.

Most of the issues with producing at 44.1 can be solved by oversampling the entire project and the plug-ins, so it's not the act of capturing at 96 kHz that improves the sound. Resampling multiple times can introduce its own problems, though, so it's better to just start at 96 kHz, but if someone sends me something that's 44.1 you'd best believe I'm resampling it before I start mucking with it.

This is exactly why I related lossless/lossy compression to 24/96. You don't need all that extra discarded information for faithful reproduction of sound. 24/96 captures a much more accurate waveform even in the audible range, but your DAC doesn't need all of that info to reproduce that waveform.
What is your evidence that "even the best A/D converters" have "clock issues"?

I can't figure out what you're talking about WRT "24/96 captures a much more accurate waveform even in the audible range". What does that mean? Are you aware of why the industry went to higher sampling rates in the first place?
 
Irvrobinson

Audioholic Spartan
I assume you are one of those recording/mastering engineers I have been waiting to hear from. Funny you mentioned the dynamic range of the voice. I have posted about a consistent type of distortion that I could hear in Adele's voice. It doesn't matter if it is on vinyl or CD; whenever she ramps up, I can always hear distortion at the end of a line of the lyrics. I wonder if she could benefit from being recorded in 32-bit, or whether someone should tell her to stay further away from the mic. On the other hand, I seem to be the only one hearing her distortion. :D I guess the only way I can find out is to go to her live concert, but then the mic will still be in the loop; only the recording process will be bypassed.
A live concert? That won't do any good. You need to hear Adele live and unamplified (and unprocessed) in a small venue. Good luck with that. Just for starters, Adele uses a recording strategy like Enya's: there is no live performance; she's just continually overdubbing to add more instrumentation, most of which she plays herself.

16-bit is more than sufficient to record a human voice, but deeper word lengths can cover a lot of sloppiness in the studio process.
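For context, the usual rule of thumb for an ideal N-bit quantizer is a theoretical dynamic range of roughly 6.02*N + 1.76 dB, so 16-bit already exceeds the dynamic range of nearly any vocal take; a quick sketch:

```python
def dynamic_range_db(bits: int) -> float:
    # Theoretical SNR of an ideal N-bit quantizer: 6.02*N + 1.76 dB
    return 6.02 * bits + 1.76

print(f"16-bit: {dynamic_range_db(16):.0f} dB")   # ~98 dB
print(f"24-bit: {dynamic_range_db(24):.0f} dB")   # ~146 dB
```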
 
PENG

Audioholic Slumlord
A live concert? That won't do any good. You need to hear Adele live and unamplified (and unprocessed) in a small venue. Good luck with that. Just for starters, Adele uses a recording strategy like Enya's: there is no live performance; she's just continually overdubbing to add more instrumentation, most of which she plays herself.

16-bit is more than sufficient to record a human voice, but deeper word lengths can cover a lot of sloppiness in the studio process.
Yeah, I figured that too. Maybe I need to see her singing voice's waveform on a good spectrum analyzer, once with a mic and amplification and once without. :D If what I heard was real, then I am quite sure it is on the input side of any amplification process. So you don't think being too close to the mic is a potential issue?
 
Bucknekked

Audioholic Samurai
Yeah, I figured that too. Maybe I need to see her singing voice's waveform on a good spectrum analyzer, once with a mic and amplification and once without. :D If what I heard was real, then I am quite sure it is on the input side of any amplification process. So you don't think being too close to the mic is a potential issue?
Peng,
So let me get this straight: you want to give Adele singing lessons? :p
:D
 
PENG

Audioholic Slumlord
Peng,
So let me get this straight: you want to give Adele singing lessons? :p
:D
Nope, I just thought that perhaps if she wouldn't mind staying a few inches further back from the mic, her singing voice might become more musical to my ears. Regardless, I still buy her albums.
 
Bucknekked

Audioholic Samurai
Nope, I just thought that perhaps if she wouldn't mind staying a few inches further back from the mic, her singing voice might become more musical to my ears. Regardless, I still buy her albums.
Peng,
I apologize profusely. I just couldn't pass up the comment.
I have the right to remain silent; I just don't have the ability... :)
 
cel4145

Audioholic
I know what ABX and DBT are supposed to do. I said I was confused by you being confused (sort of jokingly), and I explained my reasoning in my post. My point was more about the logic, not the exact wording Irv used in his post. One of the reasons I always respect Irv is that he impresses me as being logical, open-minded, and willing to admit it if he makes an error, in this case just the wording. Actually, if he had skipped the word "only", there would have been no error to admit. :D
That's incorrect. The ABX test is designed (and thus by definition) to detect whether or not one can reliably tell a difference. It is not about preference at all. So some DBTs are not about preference, and simply removing "only" doesn't work; really, the whole sentence needs to be reworked to be accurate.
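For what it's worth, the "reliably" part comes down to a one-sided binomial test; here's a minimal sketch (the 13-of-16 score is just a hypothetical example):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    # Probability of getting at least `correct` right out of
    # `trials` by pure guessing (p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 13 correct out of 16 trials
print(f"p = {abx_p_value(13, 16):.3f}")   # ~0.011 -> unlikely to be guessing
```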
 
