• Thread starter Vaughan Odendaa

MDS

Audioholic Spartan
WmAx said:
I feel this may confuse some readers. It should be pointed out that the only relevance the number of samples has for PCM audio is the bandwidth. Therefore, 44.1 kHz is perfectly sufficient for human perception, according to known credited perceptual research, since it extends the bandwidth to approximately 22 kHz.

-Chris
Perhaps confusing, but how else to explain the difference between 'continuous' and 'discrete'? A continuous waveform has an infinite number of values between any two points. Digital sampling must chop up that interval into a number of discrete values. A 44.1 kHz sampling rate will chop up every second of a continuous analog waveform into 44,100 discrete values. That is why I said that it is loosely true that sampling cannot perfectly reproduce an analog waveform.

However, as you are pointing out, the Nyquist theorem says that as long as we sample at 2x the highest frequency we want to capture, we will faithfully reproduce the signal, provided the reconstruction filter does a good job. It is a faithful reproduction to our ears, but obviously not *exactly* the same as the analog waveform.
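That "faithful reproduction" claim is easy to check numerically. The sketch below is my own illustration (not from the thread; it assumes Python with numpy): it samples a 1 kHz tone at 44.1 kHz, then rebuilds the waveform's value at an instant that falls *between* samples using Whittaker-Shannon (sinc) interpolation, the idealized version of what a reconstruction filter does.

```python
import numpy as np

fs = 44_100.0                         # sample rate (Hz)
f = 1_000.0                           # tone well below the 22.05 kHz Nyquist limit
n = np.arange(-2000, 2000)            # sample indices around t = 0
x = np.sin(2 * np.pi * f * n / fs)    # the discrete sample values

def reconstruct(t):
    """Ideal (Whittaker-Shannon) reconstruction at time t, in seconds."""
    return float(np.sum(x * np.sinc(fs * t - n)))

t = 1.234e-3                          # an instant that lands between samples
error = abs(reconstruct(t) - np.sin(2 * np.pi * f * t))
print(error)                          # tiny: the in-between value is recovered
```

With a finite window of samples the reconstruction is only approximate, but the error here is well under a percent; an ideal filter drives it to zero, which is the point of the theorem.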
 
MDS

Audioholic Spartan
krabapple said:
But beyond a certain point, as demonstrated by the work of Shannon and Nyquist, it is unnecessary to capture 'more values' to digitally capture and reconstruct the original bandwidth-limited signal with utmost fidelity. For a bandwidth-limited signal -- a category that includes all audible sound -- it is not true that the more values we capture, the more accurate our digital copy of an analog wave will be. We need only capture at a sampling rate of twice the highest frequency.
Yes and no. It IS true that the more samples we capture and the higher the bit depth used, the more accurate the digital copy will be. What does not necessarily follow is that higher sampling rates and bit depths make the result 'sound' better to our ears. That is due precisely to the fact to which you are alluding: we as humans cannot hear beyond 20 kHz, so why bother capturing the frequencies beyond that?
 
WmAx

Audioholic Samurai
MDS said:
Perhaps confusing, but how else to explain the difference between 'continuous' and 'discrete'? A continuous waveform has an infinite number of values between any two points. Digital sampling must chop up that interval into a number of discrete values. A 44.1 kHz sampling rate will chop up every second of a continuous analog waveform into 44,100 discrete values. That is why I said that it is loosely true that sampling cannot perfectly reproduce an analog waveform.
But, it will perfectly reproduce that analog waveform (anti-alias filter imperfections aside). The sample points are only one variable. When combined with the amplitude reference, the reconstruction of the sample points (if you want to think of it in this manner) is accomplished via a method that looks a lot like a spline curve in its ultimate result. This results in precisely reproducing the analog waveform (in time, frequency and amplitude) up to the frequency dictated by the Nyquist limit. There is no 'chopping' present in any of the final results, as is popularly depicted in poor references such as howstuffworks.com, unless the DAC is constructed without a reconstruction filter (Audio Note is one example of such poor design). The popular chopping visual is only the unfiltered raw sample-frequency stepping.

-Chris
 

MDS

Audioholic Spartan
OK WmAx, another poor choice of words. I did not mean to imply that the resulting waveform will be choppy or appear to be a stair step. How about 'divide' the interval into discrete points in time? :)

It's easier for a layperson to understand using that analogy. The second hand of a clock 'chops' a minute into 60 slices. A 3 GHz clock used by a CPU 'chops' a second into 3 billion ticks. One second of audio is 'chopped' into 44,100 discrete values if you are using a 44.1 kHz sampling rate. The reconstruction filter will then connect the dots (so to speak) into a continuous analog waveform that for all intents and purposes is 'perfect' to our ears.
 
WmAx

Audioholic Samurai
MDS said:
OK WmAx, another poor choice of words. I did not mean to imply that the resulting waveform will be choppy or appear to be a stair step. How about 'divide' the interval into discrete points in time? :)
It's easier for a layperson to understand using that analogy. The second hand of a clock 'chops' a minute into 60 slices. A 3 GHz clock used by a CPU 'chops' a second into 3 billion ticks. One second of audio is 'chopped' into 44,100 discrete values if you are using a 44.1 kHz sampling rate.
Yes. However, I tend to believe that this is confusing to the general person, because the examples given have no way to generate intermediate data. Some people believe this is the case with PCM, but it is not. Because of the relationship of the discrete 44,100 sample points per second in interaction with the amplitude reference, all information in the original waveform will be reproduced, up to the Nyquist limit. For a simple example: a 21 kHz waveform can be perfectly reproduced at a 44.1 kHz sample rate, because it is reconstructed as a smooth curve in reference to the amplitude axis. This would otherwise not be possible without substantial distortion, because a 21 kHz sine wave does not have an evenly distributed relationship with the 44.1 kHz sample rate. If the chopped points had the effect that is popularized, reproducing a 21 kHz signal accurately would not be possible.

The reconstruction filter will then connect the dots (so to speak) into a continuous analog waveform that for all intents and purposes is 'perfect' to our ears.
The waveform will be perfect (aside from external circumstances such as imperfect anti-alias filters, jitter distortion, etc.) for all intents and purposes, not just "to our ears". I suppose it is fair to refer to this as "connect the dots", but the specific method used to connect the dots does so perfectly within the Nyquist limitations. I know it may seem that I am nitpicking your response, but I fear that the general person will draw an erroneous conclusion as to what digital audio can actually reproduce, thus perpetuating the nonsense popularized by certain audiophiles and audio engineers.

-Chris
 
avnetguy

Audioholic Chief
WmAx said:
I suppose it is fair to refer to this as "connect the dots", but the specific method used to connect the dots, does so perfectly within the Nyquist limitations.
With your 21kHz example I'd like to know how the proper amplitude is reconstructed with only 2 sample points per cycle?

Steve
 
MDS

Audioholic Spartan
I don't mind nitpicking WmAx, I know you understand the technical details.

One should be able to provide a simplified description of the process in a few sentences: 'Every 44,100th of a second the amplitude of the waveform is examined and a value is assigned to represent the amplitude of the waveform at that point in time. The range of values depends on the bit depth. On playback, a reconstruction filter converts the stream of discrete values into a continuous (analog) waveform.' [If you'd like, we can say it performs 'curve fitting' or interpolation, which in layman's terms is exactly 'connecting the dots'].
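That two-sentence description maps almost line for line onto code. A minimal sketch (my own, assuming Python with numpy; the 440 Hz tone and half-scale level are arbitrary choices) of the 'assign a value at each instant' step at 16-bit depth:

```python
import numpy as np

fs, bits = 44_100, 16
full_scale = 2 ** (bits - 1) - 1       # 32767: the largest 16-bit sample code

t = np.arange(fs) / fs                 # one second = 44,100 sample instants
x = 0.5 * np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone at half scale

# 'A value is assigned to represent the amplitude at that point in time':
codes = np.round(x * full_scale).astype(np.int16)

# Rounding to the nearest code costs at most half a step of accuracy
max_error = np.max(np.abs(codes / full_scale - x))
print(max_error)                       # about 1.5e-5 of full scale
```

The 'range of values depends on the bit depth' part is the `full_scale` constant: 16 bits gives 65,536 possible codes, so the worst-case rounding error is half of one step.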
 
mtrycrafts

Seriously, I have no life.
MACCA350 said:
And the third generation digital copy is still the same as the original etc. etc. etc:D

cheers:)

So will the nth reproduction, most likely:D
 
mtrycrafts

Seriously, I have no life.
MDS said:
The reconstruction filter will then connect the dots (so to speak) into a continuous analog waveform that for all intents and purposes is 'perfect' to our ears.

And to our eyes looking at a scope as well, perfectly.
 
mtrycrafts

Seriously, I have no life.
avnetguy said:
You'll always find some that like the sound of analog over digital,
Yes, :D Our preferences are strange at times to others.

avnetguy said:
maybe they'll say something is missing or it has a different sound that they don't like but for the vast majority CD quality audio passes with flying colors as far as reproduction goes.
Yes, they can sound different or the same, if the vinyl is copied to a CD. So, is that the fault of the digital medium? :D Actually it shows the opposite: that CD can sound the same as vinyl.

avnetguy said:
Now having said that, it's time for me to open a can of worms :)
How do you drink that stuff? LOL :D

avnetguy said:
In order to 'properly' represent an analog waveform (not just retaining the frequency, which the Nyquist limit determines, but also realistically representing the amplitude), one must have a sampling rate 10 times higher than the highest frequency digitized. For example, would a 14 kHz sine wave digitized at 44.1 kHz be truly represented by only 3 data points? But getting back to the real world ... would we ever hear the difference due to the limitations of our own ears?

Steve
Yes, you are correct, this is a can of worms, as the claim is not correct. At least two samples per cycle are all that is needed, proven time and time again. Otherwise the reconstruction would show an awful picture. It doesn't.
 
avnetguy

Audioholic Chief
mtrycrafts said:
Yes, you are correct, this is a can of worms, as the claim is not correct. At least two samples per cycle are all that is needed, proven time and time again. Otherwise the reconstruction would show an awful picture. It doesn't.
If this is the case then, again, how do you make sure you get the maximum amplitude of a 21 kHz sine wave with only 2 sample points per cycle? Your "chances" of catching the peaks (or coming near them) are pretty slim, aren't they? Linear interpolation certainly isn't going to fix this. :) I'm not disputing that the frequency will be retained, just the amplitude. BTW, the reconstruction wouldn't show an awful picture, just one with an incorrect amplitude.

Steve
 
jonnythan

Audioholic Ninja
Remember that each sample has 16 bits of resolution... it can't be *that* hard to draw the wave in between the dots, because it's obviously been done for a long time.
 
MDS

Audioholic Spartan
The max amplitude of a sine wave will get the maximum bit value. A sine wave is simple - music is not. Music is a complex waveform that is the sum of a (potentially) infinite number of sine waves.
 
WmAx

Audioholic Samurai
avnetguy said:
If this is the case then, again, how do you make sure you get the maximum amplitude of a 21 kHz sine wave with only 2 sample points per cycle? Your "chances" of catching the peaks (or coming near them) are pretty slim, aren't they? Linear interpolation certainly isn't going to fix this. :) I'm not disputing that the frequency will be retained, just the amplitude. BTW, the reconstruction wouldn't show an awful picture, just one with an incorrect amplitude.

Steve
It is greater than 2 sample points per waveform cycle, as required by the Nyquist theorem. Refer to the Lavry Engineering document that I linked, which covers this subject in detail starting around page 23.

Unfortunately, my combination of computer hardware and software available at the moment cannot render a correct 21 kHz signal file. I am limited to 20 kHz. At 44.1 kHz, this is 2.205 samples per 20 kHz cycle, as opposed to 2.1 samples per cycle at 21 kHz. Since you claimed earlier that a sample rate of 10x a given frequency is required in order to properly reproduce it, and even questioned whether 14 kHz could be properly reproduced, a 20 kHz signal should suffice for this simple applied example, since it comes nowhere near the minimum (10x) rate that you specified was required. The following graph is the Adobe Audition 1.5 representation of the waveform of an actual 20 kHz signal played from a 44.1 kHz file and simultaneously recorded. The dots represent the actual sample/timing points, and the line is Adobe Audition's simulation of how a DAC will reproduce this waveform based on the available data points. It's not convenient for me to show the output on a scope at the moment, but I have done so in the past, and it is exactly as Adobe Audition calculates, thus exactly as sampling theory dictates.

http://www.linaeum.com/images/20khz_play_record.gif

-Chris
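The screenshot above can be mimicked numerically. This sketch (my own, assuming Python with numpy; it idealizes the reconstruction filter as Whittaker-Shannon interpolation) samples a 20 kHz sine at 44.1 kHz, so roughly 2.2 samples per cycle with almost no sample landing on a peak, then interpolates onto a fine time grid and reads off the recovered peak amplitude:

```python
import numpy as np

fs, f = 44_100.0, 20_000.0
n = np.arange(-4000, 4000)
x = np.sin(2 * np.pi * f * n / fs)     # raw samples: only ~2.2 per cycle

# Ideal reconstruction onto a fine grid spanning a few cycles near t = 0
t = np.linspace(-1e-4, 1e-4, 4001)     # 200 microseconds, finely resolved
y = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

peak = np.max(np.abs(y))
print(peak)                            # very close to 1.0 despite the sparse samples
```

The recovered peak comes out within a fraction of a percent of 1.0, which is the amplitude question in miniature: the peaks live between the samples, and the interpolation puts them back.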
 

Vaughan Odendaa

Senior Audioholic
My manager is an audio engineer who has been in the audio field for 30 years. He has a boatload of certificates as well. Now we had a discussion a few days ago and he tells me that digital simply loses information whereas analog preserves that information. You get everything on the disk and you hear everything on the disk, as it were.

He also said that oversampling cannot reproduce the signal without having problems of its own. He was directing pot shots at oversampling and supersampling techniques. Now he is quite old, and he will definitely not post on this forum. I don't think he has ever posted on a forum in his life. I want to learn and understand this so that I can rebut his points correctly and so that I can understand this subject for myself.

I need to understand it. I haven't bought any books on this (I'm more of a speaker design, enclosure design, room acoustics kind of guy), but I need to start somewhere. I am going to read that thread that WmAx posted. Thank you!

And thanks for participating. I appreciate it and that goes to everyone else as well. Thanks.
 
avnetguy

Audioholic Chief
WmAx said:
It is greater than 2 sample points per waveform cycle, as required by the Nyquist theorem. Refer to the Lavry Engineering document that I linked, which covers this subject in detail starting around page 23.

Unfortunately, my combination of computer hardware and software available at the moment cannot render a correct 21 kHz signal file. I am limited to 20 kHz. At 44.1 kHz, this is 2.205 samples per 20 kHz cycle, as opposed to 2.1 samples per cycle at 21 kHz. Since you claimed earlier that a sample rate of 10x a given frequency is required in order to properly reproduce it, and even questioned whether 14 kHz could be properly reproduced, a 20 kHz signal should suffice for this simple applied example, since it comes nowhere near the minimum (10x) rate that you specified was required. The following graph is the Adobe Audition 1.5 representation of the waveform of an actual 20 kHz signal played from a 44.1 kHz file and simultaneously recorded. The dots represent the actual sample/timing points, and the line is Adobe Audition's simulation of how a DAC will reproduce this waveform based on the available data points. It's not convenient for me to show the output on a scope at the moment, but I have done so in the past, and it is exactly as Adobe Audition calculates, thus exactly as sampling theory dictates.

http://www.linaeum.com/images/20khz_play_record.gif

-Chris
Great image for this discussion WmAx.

Now let's look at the dots (samples) of the sine wave you displayed, particularly the 2nd, 3rd and 4th cycles. On those cycles we see that the actual maximum amplitude is not recorded by the ADC, so my question is, how is it properly reproduced? Does the output filtering reconstruct the sine wave for the 2nd, 3rd and 4th cycles as well as it does for the first cycle?

Steve
 
mtrycrafts

Seriously, I have no life.
Vaughan Odendaa said:
My manager is an audio engineer who has been in the audio field for 30 years. He has a boatload of certificates as well. Now we had a discussion a few days ago and he tells me that digital simply loses information whereas analog preserves that information. You get everything on the disk and you hear everything on the disk, as it were.

He also said that oversampling cannot reproduce the signal without having problems of its own. He was directing pot shots at oversampling and supersampling techniques. Now he is quite old, and he will definitely not post on this forum. I don't think he has ever posted on a forum in his life. I want to learn and understand this so that I can rebut his points correctly and so that I can understand this subject for myself.

I need to understand it. I haven't bought any books on this (I'm more speaker design, enclosure design, room acoustics-kind of guy), but I need to start somewhere. I am going to read that thread that WmAx posted. Thank you !

And thanks for participating. I appreciate it and that goes to everyone else as well. Thanks.
Forget it. You will never convince him. He has no immunity from being wrong, period.
 
WmAx

Audioholic Samurai
avnetguy said:
Great image for this discussion WmAx.

Now let's look at the dots (samples) of the sine wave you displayed, particularly the 2nd, 3rd and 4th cycles. On those cycles we see that the actual maximum amplitude is not recorded by the ADC, so my question is, how is it properly reproduced?
At one stage in a DAC, the sampled points are output as stepped voltage pulses (not shown in the picture I linked) of variable magnitude. The process of reconstruction filtering restores the original waveform from this state.

Refer to the Lavry Engineering paper [1] that I have referenced several times in this thread so far. It explains theoretical and applied reconstruction in detail, along with illustrated examples.

-Chris

[1] Sampling Theory For Digital Audio
Lavry, Dan
http://www.lavryengineering.com/documents/Sampling_Theory.pdf
 
avnetguy

Audioholic Chief
WmAx,
I did read the article, but it doesn't touch on the amplitude issue I mentioned. While his illustrated examples are nice, an actual DSO capture showing the sine wave's amplitude in and out would tell the story perfectly. I wish I still had the equipment around to test this myself, but I got out of that game many years ago; maybe I can figure something out with the hardware I have lying around now. Again, let me state that I don't believe this really matters, as our ears probably can't detect the small amplitude differences at those frequencies. I just don't see it being a so-called "perfect" reconstruction if one were to view the output on a scope.

Steve
 
mtrycrafts

Seriously, I have no life.
avnetguy said:
Again, let me state that I don't believe this is really needed as our ears probably can't detect the small amplitude differences at those frequencies, Steve

The ABX DBT protocol calls for levels matched to within 0.1 dB SPL. That is because no one can detect such small differences. :D About 0.25 dB is the smallest detectable difference with sensitive test tones; the threshold is higher for music and other non-test signals. :)
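For scale, 0.1 dB is a very tight match. Converting decibels to an amplitude ratio is just the standard 20*log10 relationship, shown here as a quick worked example (my own illustration):

```python
import math

# A 0.1 dB level difference corresponds to this amplitude ratio
ratio = 10 ** (0.1 / 20)
print(round(ratio, 4))   # 1.0116, i.e. roughly a 1.2% amplitude difference
```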
 
