Digital Converter Comparison = Pro Audio Snake Oil?

Press Record

Enthusiast
Hi, first-time poster here. I wanted to get input from the other side of the aisle, so to speak. I am a musician who knows much more about the process of recording and mixing than about audio electronics.

Black Lion Audio is a company that modifies existing pro audio equipment such as microphone preamps and converters with the promise of improved performance.

The video below demonstrates a sonic comparison between a stock Digidesign ADA converter and a Black Lion Audio-modified one. It appears from the video that he is only comparing the DA side of both converters.

http://www.gearwire.com/digidesign-digi002-modification001.html


According to the video, the signal chain going into each converter is the same.

When listening, there is no doubt that an audible difference exists between the two clips he plays. But the first clip also sounded much louder than the second. I used my audio editing software to analyze the RMS level of both clips, and it turns out that the first clip (modded device) is over 6 dB louder than the second clip (stock device).
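For anyone who wants to repeat the check, here is a rough sketch of the kind of RMS comparison I did (Python; the filenames are placeholders for the two clips, which I assume have been exported from the video as WAV files):

```python
# Sketch: compare the RMS level of two clips in dB (filenames are placeholders).
import numpy as np
import soundfile as sf

def rms_dbfs(path):
    data, _ = sf.read(path)           # float samples in [-1.0, 1.0]
    if data.ndim > 1:
        data = data.mean(axis=1)      # fold stereo down to mono
    rms = np.sqrt(np.mean(data ** 2))
    return 20 * np.log10(rms)         # dB relative to full scale

modded = rms_dbfs("modded_clip.wav")
stock = rms_dbfs("stock_clip.wav")
print(f"modded: {modded:.1f} dBFS, stock: {stock:.1f} dBFS, "
      f"difference: {modded - stock:.1f} dB")
```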

This appears to me to be misleading at the very least, and maybe even deceptive, since we all know that simply making something louder makes it sound subjectively more detailed and defined, with better-sounding low end, and so on. Can anyone with more audio electronics smarts than me listen to the video and let me know your opinions? Be sure to listen to the comments on the mods and how they affect the sound.

Also, in my experience there is no reason to think that recording engineers are any less susceptible to manufacturer claims than AV enthusiasts.
 
fmw

Audioholic Ninja
Yes, you are correct. It is misleading to put up audio samples without level matching. People will prefer the louder one every time, even if there is no audible difference when level matched.
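Level matching is trivial to do before comparing, too. A minimal sketch (the filenames are hypothetical, and it assumes both clips exist as WAV files): measure the RMS of each clip and scale one so the levels agree.

```python
# Sketch: match the RMS level of clip B to clip A before any listening comparison.
# Filenames are hypothetical; assumes both clips exist as WAV files.
import numpy as np
import soundfile as sf

a, rate_a = sf.read("clip_a.wav")
b, rate_b = sf.read("clip_b.wav")

rms = lambda x: np.sqrt(np.mean(np.square(x)))
gain = rms(a) / rms(b)                      # linear gain that equalizes RMS level
print(f"applying {20 * np.log10(gain):+.2f} dB to clip B")

# Note: if the gain is greater than 1, check the matched file for clipping.
sf.write("clip_b_matched.wav", b * gain, rate_b)
```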

Let me share a few things. In the first place, DACs are DACs and ADCs are ADCs. They all produce results that are audibly the same, if not digitally the same. Any audible differences would have to be engineered into an analog stage of the equipment. In other words, something would have to be done to the analog circuitry to modify the frequency response, add harmonic distortion (vacuum tubes), or otherwise make the audio stage of the equipment operate in a non-linear fashion. You can't change the "sound" in the converter chip itself.

I don't need to tell you that you can modify the frequency response of the audio chain very easily with an equalizer, and you can put a tube preamp in the audio chain to add harmonic distortion if you like. It isn't necessary to modify the circuitry to do that and, in fact, it is downright stupid to modify the circuitry for it, because the modification isn't adjustable or defeatable the way an EQ would be.
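To make that point concrete in the digital domain, here is a toy sketch of adding a little harmonic distortion to a clip entirely in software, where it stays adjustable and defeatable, unlike a circuit mod (the filename is a placeholder, and tanh soft clipping is only a crude stand-in for a tube-style stage):

```python
# Toy sketch: add mild, adjustable harmonic distortion to a clip in software.
# The filename is a placeholder; tanh soft clipping is a crude tube-ish stand-in.
import numpy as np
import soundfile as sf

x, rate = sf.read("dry_clip.wav")
drive = 2.0                                   # raise for more harmonics, lower for less
wet = np.tanh(drive * x) / np.tanh(drive)     # soft clip, renormalized to full scale
sf.write("distorted_clip.wav", wet, rate)
```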

Are mods snake oil? Well, not if they make a modification to the basic performance of the equipment. You are paying for a modification, and if that's what you get, then that isn't snake oil. Are mods a good idea? Certainly not, for the reasons mentioned above.

Why would you take a competently designed and made product and then modify it so that it no longer meets the design specifications? No reason that I can think of. If you want to alter the sound of things, that's fine but circuit modifications aren't the way to do it in my opinion.
 
Press Record

Enthusiast
"Let me share a few things. In the first place, DACs are DACs and ADCs are ADCs. They all produce results that are audibly the same, if not digitally the same. Any audible differences would have to be engineered into an analog stage of the equipment. In other words, something would have to be done to the analog circuitry to modify the frequency response, add harmonic distortion (vacuum tubes), or otherwise make the audio stage of the equipment operate in a non-linear fashion. You can't change the "sound" in the converter chip itself."

FMW, thanks for your response. I understand about the non-linear modifications to the analogue circuitry that can be employed to make a device sound different. My question is about the differences that people claim, at least, to be able to hear due to differences in analogue circuitry that is designed to be linear. Examples would include the different types of op-amps used, discrete designs versus ICs, and a host of other topologies. Another claim is that the power supply makes a sonic difference. Yet another is modifications to the clock of the device. Below are excerpts from Black Lion's own website. Notice they do not agree with your statement that an ADC is an ADC and does not affect the sound.

"(The un-modified Digi002) Line stages are based around ST Microelectronics' version of the TL072 and TL074, and JRC's NJM4580 and NJM5532. Personally, I find the TL07x family to be a little on the noisy side, and I don't care what the 4580's datasheets claim, they're NOT low noise either."

"While experimenting with clock designs, we found that we liked the quality of the audio better when the master clock frequency was increased. Due to the superior nature of third overtone XO crystal oscillators (they have inherently less jitter than fundamental XO oscillators), we decided to increase the 002's original master clock frequencies, and then divide them down as necessary. We started with a pair of ultra-low jitter (1 picosecond average) XO oscillators, one for 44.1 kHz and its multiples, and one for 48 kHz and its multiples. We divide these two frequencies using a proprietary method that keeps accumulated jitter to a bare minimum: under 10 picoseconds. Would you ever use the word "punch" when describing the sound of your 002? With our internal clock, you most certainly will."

"In late 2006, we began experimenting with linear power supplies in the 002. We've never been big fans of switch mode supplies in audio, mostly because of the way they seem to strip the audio of any real body or impact. By powering the analog and digital stages separately, we felt we could improve overall sonic character as well as completely eliminating the power supply as a source of crosstalk between analog and digital. By using a low-emission toroid with a bigger power supply rail, we are increasing headroom, and as a result, dramatically improving sonic impact."

"We also toyed with different converter configurations. Converters are very sensitive to their power sources, and proper decoupling is very crucial. We began to experiment with some theories concerning resonant frequencies within power supplies and their overall impact on conversion. After almost six months of testing, we came up with a proprietary method for reducing the noise and decoupling the 002's converters. We feel the end result represents a great breakthrough, and takes the 002 to a completely new level of sonic quality."
 
fmw

Audioholic Ninja
I wouldn't expect them to agree with me. And obviously, I don't agree with them. I hope you aren't one of those people who believe something is true simply because they see it in print.

You see, I spent quite a while back around the turn of the century trying to determine whether clock jitter and other things are audible or not. I found that clock jitter is not audible, or at least is not in music reproduction equipment. Not surprisingly, many others have done the same testing and reached the same conclusion I did. The audibility of jitter was debunked years ago.

You made some comments about things that people report are audible but may or may not be. Some of these things are ingrained in people's beliefs because of what they read and hear from others. If there is little or no audible difference between two audio products, people will substitute bias, or what we call perceptual hearing. Without bias-controlled testing, people will hear all kinds of subtle audible differences that arise from their brains rather than from the equipment. Read up on the placebo effect and blind listening tests.
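The nice part about bias-controlled listening is that scoring it is plain statistics. As a sketch, here is how you would check whether a listener's ABX score beats guessing (the trial counts are just an example):

```python
# Sketch: probability of getting k or more correct out of n ABX trials by pure guessing.
from math import comb

def p_by_chance(k, n):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Example: 12 correct out of 16 trials.
print(f"p = {p_by_chance(12, 16):.3f}")   # about 0.038 -- unlikely to be pure guessing
```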

After reading your excerpt from the web site, let me change one comment I made earlier. I said mods of this type aren't necessarily snake oil. This one is definitely snake oil. Pure, unadulterated bias and belief. Technobabble. Nonsense. Sorry. I don't question that the fellow believes in what he does, but I would suggest he get some bias-controlled testing under his belt.

Master clock frequency has nothing to do with audibility (or jitter, for that matter). Power supply rails have nothing to do with headroom as long as the circuitry gets enough current. "Decoupling" a power supply from an ADC or DAC chip has nothing to do with anything. Crosstalk isn't caused by power supplies or transmitted by them. "Punch"? "Sonic impact"? "Body"? Give me a break. The consumer audio industry has magazines full of this kind of babble every month. One always hopes that audio pros would avoid this kind of stuff, but obviously not all of them do. Many fall prey to the same nonsense high-end audiophiles do on the consumer end of the business.

I do think the Digidesign 002 is a fine unit. It is quiet and accurate. And, as you know, Pro Tools is an industry standard in sequencing, production and recording software. If you have an 002, you are equipped to make recordings that are as good as anybody's. No need to go any further with it if it meets your requirements. Spend your time working on mic selection and placement, room acoustics, isolation and the things that will make a difference in your recording quality. Chasing clock jitter won't get you anywhere. I promise you. Been there. Done that.
 
MidnightSensi

Audioholic Samurai
Good quality ADCs and DACs are going to sound about the same. The main thing to look for in those units is nice, sturdy inputs and outputs. The nice stand-alone ones are still fairly expensive even though they aren't that complex.

Yes, you are correct. It is misleading to put up audio samples without level matching. People will prefer the louder one every time, even if there is no audible difference when level matched.
Right, even if the mixer is three notches into the red, the amps are running on the rails and the bass bins sound like wind machines, they still like it louder.
 
jliedeka

Audioholic General
There are things that can change sound. If parts don't have tight tolerances, the difference may be audible.

Could someone point me to information about jitter audibility being debunked? I smelled a rat when Robert Harley went on about it but I'd like something more factual to go on.

Jim
 
MidnightSensi

Audioholic Samurai
There are things that can change sound. If parts don't have tight tolerances, the difference may be audible.

Could someone point me to information about jitter audibility being debunked? I smelled a rat when Robert Harley went on about it but I'd like something more factual to go on.

Jim
How tight of a tolerance do you think a DAC should have? Small enough so the electrons can move?

Jitter is for people that rub peanut butter on their face and read Stereophile. ;) The servo controlling disc speed and the oscillator controlling the buffer may get out of whack in terms of picoseconds, but it doesn't change what is being processed, and the problem is so minuscule compared to the rest of the chain (source data, loudspeaker quality, acoustics quality, and so on) that it just isn't worth worrying about. Now, if a part is bad, it might be an issue that could actually be heard... but then you'll just know your CD player is bad and replace it. Another workaround is to just copy the CD to a hard drive, which has a constant spindle speed. I know that isn't the answer you are looking for, but honestly I haven't found any of it to be applicable in most sound reinforcement or home audio applications.
 
jliedeka

Audioholic General
Sorry, throwing stones at people doesn't make them adulterers. I'm of the school of thought that there are scientifically explainable reasons for the quality of sound reproduction. I'm looking for data or at least someone's careful analysis of well collected data.

Jim
 
MDS

Audioholic Spartan
There are things that can change sound. If parts don't have tight tolerances, the difference may be audible.
Jitter is for people that rub peanut butter on their face and read Stereophile. ;) The servo controlling disc speed and the oscillator controlling the buffer may get out of whack in terms of picoseconds, but it doesn't change what is being processed, and the problem is so minuscule compared to the rest of the chain (source data, loudspeaker quality, acoustics quality, and so on) that it just isn't worth worrying about.
I can't give any specific references off the top of my head but it is a topic I've read extensively about and I agree in principle with what MidnightSensi said.

Jitter is a timing error, and in the past I've given a layman's explanation of the concept. First and foremost, jitter is inherent in *all* digital systems - it cannot be eliminated entirely. It's a contentious topic, but it mostly falls along the lines of the subjectivists vs. the objectivists (like most things audio related). There are well known and respected mastering engineers who swear they can hear the effect of the jitter, but there are also people who swear they can detect a 0.1 dB difference in loudness when it has been proven that we cannot.

A typical CD player will exhibit jitter on the order of 20-30 picoseconds. That is 20 trillionths of a second. In my mind, that is much ado about nothing, especially considering that I've seen studies that put the minimum audibility of timing changes at around 6 milliseconds.

If you rip a CD and the drive is slightly inaccurate (a thing of the past) and the rip is off because it picks one block too early or one block too late, that is 1/75 of a second (588 samples). You cannot hear that difference, so why worry about jitter that is several orders of magnitude smaller than that?
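To put rough numbers on both points, here is my own back-of-the-envelope sketch (worst case of a full-scale 20 kHz tone with 30 ps of jitter, plus the CD block figure above):

```python
# Back-of-the-envelope sketch for the two figures above.
import math

# Worst-case error from a 30 ps timing error on a full-scale 20 kHz sine:
# the signal changes by at most 2*pi*f per unit time, so error ~ 2*pi*f*dt.
f = 20_000            # Hz, top of the audio band
dt = 30e-12           # 30 picoseconds of jitter
err = 2 * math.pi * f * dt
print(f"peak error ~ {20 * math.log10(err):.0f} dBFS")   # roughly -108 dB

# One CD block (1/75 of a second) expressed in samples at 44.1 kHz:
print(44_100 / 75)    # 588.0 samples
```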
 
mtrycrafts

Seriously, I have no life.
Could someone point me to information about jitter audibility being debunked? I smelled a rat when Robert Harley went on about it but I'd like something more factual to go on.

Jim
What does Harley have to say about it beyond that it is bad? :D

Here you go, one paper on jitter audibility.

http://www.jstage.jst.go.jp/article/ast/26/1/50/_pdf

No one in this experiment detected jitter at 250 ns, which is a huge amount today when gear is in the picosecond range.
As to Harley, I'd question him on the time of day he gives ;):D

Here is another one

Benjamin, Eric and Gannon, Benjamin, 'Theoretical and Audible Effects of Jitter on Digital Audio Quality,' 105th AES Convention, 1998, Preprint 4826.

Similar results; it takes a huge amount of jitter to be audible.
 
mtrycrafts

Seriously, I have no life.
...When listening, there is no doubt that an audible difference exists between the two clips he plays. But the first clip also sounded much louder than the second. I used my audio editing software to analyze the RMS level of both clips, and it turns out that the first clip (modded device) is over 6 dB louder than the second clip (stock device).

...
That is a huge level difference; the clips would have to be level matched to within 0.1 dB SPL for a good DBT analysis ;)
Perhaps they want to impress by volume difference? You can do that with the master volume knob, for free :D
Also, it would be good if they gave some measured differences: frequency response, THD+N, etc.
 
MidnightSensi

Audioholic Samurai
Sorry, throwing stones at people doesn't make them adulterers. I'm of the school of thought that there are scientifically explainable reasons for the quality of sound reproduction. I'm looking for data or at least someone's careful analysis of well collected data.

Jim
I didn't mean to come off coarse, mate; I come off horribly on the Internet.
 
fmw

Audioholic Ninja
One bugaboo we face in the digital recording world is latency. It relates to the time the computer and data ports need to process audio and data. You can strike a note on a MIDI controller and not have it become audible for a quarter of a second, as an example. Or you can overdub and have the recorded track timed behind the monitored track. Latency can be reduced (like jitter) but not eliminated. Recording engineers jump through hoops to try to get it as low as possible.

Most recording engineers agree that 10 or 20 milliseconds of latency is low enough not to worry about. Achieving single-digit millisecond latency is akin to finding the holy grail. Imagine them worrying about something you measure in picoseconds.
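The arithmetic behind those figures is just buffer size divided by sample rate. A quick sketch with some typical buffer sizes (these are per-buffer numbers; real round-trip latency is higher once you add input and output buffers plus driver and converter overhead):

```python
# Sketch: latency contributed by a single audio buffer of a given size.
sample_rate = 44_100                     # Hz
for buffer_samples in (64, 128, 256, 512, 1024):
    ms = 1000 * buffer_samples / sample_rate
    print(f"{buffer_samples:5d} samples -> {ms:5.1f} ms per buffer")
```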

Having said that, however, there is a market for master clocks in the recording business. While I've only tested jitter in two channel audio, I suppose it is possible that a live recording of a symphony orchestra with 30 or 40 mics could stack up enough jitter to become audible. I don't know. I doubt it but I don't know. As I said, some of these guys do buy and use master clocks in their recording toolbox. I don't read a lot of controversy about them. I also don't read much about when and how to implement them.
 
mtrycrafts

Seriously, I have no life.
...A typical CD player will exhibit jitter on the order of 20-30 picoseconds. That is 20 trillionths of a second...
Then Harley is barking up the wrong tree, as usual :D But then, he likes to do that for some odd reason. Fame?
The citation I posted shows that 250 ns, about 10,000 times more, is not audible :D
 
Press Record

Enthusiast
Having said that, however, there is a market for master clocks in the recording business. While I've only tested jitter in two channel audio, I suppose it is possible that a live recording of a symphony orchestra with 30 or 40 mics could stack up enough jitter to become audible. I don't know. I doubt it but I don't know. As I said, some of these guys do buy and use master clocks in their recording toolbox. I don't read a lot of controversy about them. I also don't read much about when and how to implement them.
Actually, there is a lot of controversy in the recording community over whether external clocking improves sound. Many engineers buy an external clock believing it will improve the timing accuracy of their converters and improve the sound. This is a myth. The bottom line is that the most likely result of slaving a given converter's internal clock to an external clock source is more jitter. (The reason is that the internal clock must continually adjust itself via a phase-locked loop circuit to the incoming master clock source, which tends to increase internal timing irregularity -- jitter.)
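As a toy numerical illustration of that reasoning only (made-up jitter figures, not a real PLL model): if the slaved clock carries even part of the reference's jitter on top of its own, the total goes up rather than down.

```python
# Toy illustration with made-up numbers -- not a real PLL model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
internal_only = rng.normal(0, 5e-12, n)        # 5 ps RMS jitter, free-running internal clock
external_ref = rng.normal(0, 20e-12, n)        # 20 ps RMS jitter on the external reference
slaved = internal_only + 0.5 * external_ref    # assume half the reference jitter leaks through

print(f"free-running: {internal_only.std() * 1e12:.1f} ps RMS")
print(f"slaved:       {slaved.std() * 1e12:.1f} ps RMS")
```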

When syncing multiple converters is necessary, many folks will just use one of the converters as the master clock source, so as to have minimal jitter in at least one of the converters. However an external, standalone clock source may have more advanced routing capabilities and flexibility and may be more practical and easier to deal with in a large installation with many converters.


Digidesign published a white paper which goes into more detail:

http://www.digidesign.com/index.cfm?navid=48&itemid=23613&langid=100
 
fmw

Audioholic Ninja
Actually, there is a lot of controversy in the recording community over whether external clocking improves sound. Many engineers buy an external clock believing it will improve the timing accuracy of their converters and improve the sound. This is a myth. The bottom line is that the most likely result of slaving a given converter's internal clock to an external clock source is more jitter. (The reason is that the internal clock must continually adjust itself via a phase-locked loop circuit to the incoming master clock source, which tends to increase internal timing irregularity -- jitter.)

When syncing multiple converters is necessary, many folks will just use one of the converters as the master clock source, so as to have minimal jitter in at least one of the converters. However an external, standalone clock source may have more advanced routing capabilities and flexibility and may be more practical and easier to deal with in a large installation with many converters.


Digidesign published a white paper which goes into more detail:

http://www.digidesign.com/index.cfm?navid=48&itemid=23613&langid=100
Thanks for that info. I have yet to even talk to an engineer who has an external master clock, so I haven't encountered the controversy. I have no problem believing it is a myth, as you say. The audio business is loaded with myths, as we all know.
 
mtrycrafts

Seriously, I have no life.
... The audio business is loaded with myths, as we all know.

Hey, that is a business all by itself: peddling mythology, urban legends and voodoo, be it in audio, homeopathic meds, or whatever. :D
 
JimWest

Audiophyte
Secondary Artifacts from Jitter

Here's some musician input, in response to mtrycrafts.

===mtrycrafts writes===
Here you go, one paper on jitter audibility.
http://www.jstage.jst.go.jp/article/ast/26/1/50/_pdf
No one in this experiment detected jitter at 250 ns, which is a huge amount today when gear is in the picosecond range.
As to Harley, I'd question him on the time of day he gives ;):D
===

Studies can be technically correct and informative, yet not valid as general statements about real-life scenarios.

While, technically, 250 ns may not be perceptible in a narrowly defined test, those nanoseconds can produce audible secondary artifacts and contribute to a lack of intimacy, which makes a profound performance or listening experience impossible.

I've noticed that performances (on Broadway) that run through digital systems feel disjunct between the musicians and singers. This perception is not easily measurable, but it is a sense of neurotic isolation between performers, and between performers and audience. In other words, there is a complete lack of the real-time intimacy that is often profound in acoustic scenarios and less often profound in electronic/analog scenarios.

These problems are most obviously due to latency (not so much jitter, I assume) within digital compression, echo, routing, etc. But technicians will still argue that latency is not a problem while referring to narrowly constructed tests. Arguments against jitter being a problem are, I suspect, similar technical rationalizations. So I advocate: the less jitter, the better.
 
businessjeff

Junior Audioholic
a little late to the "punch", lol, but WHAMO!!!


http://en.wikipedia.org/wiki/Psychoacoustic




I think what most people need to consider is the law of diminishing returns: how much money are you spending, and for what, really? With that said, I would focus on comparing DAs that simply spec out higher in terms of what they can do. How many channels does it support, what bit depths and sample rates does it support, impedance matching, power supply. Those are what matter; anything that gets into such minute, barely audible differences in sound isn't going to be worth the money.


I don't know much about this, but I try to keep one fundamental rule in mind when comparing an entire audio setup: how well does it represent the original waveform? And remember, that will always be subjective from person to person because of psychoacoustics. So if a company hands you two sound clips and says "here, which one sounds better?", it's no different than someone handing you two pieces of cake and asking the same thing.

Now, which one is more nutritious or has more calories is something that you cannot figure out by being handed two pieces of cake; same with two pieces of audio.
 