In article , Wally wrote:
> If I sample a 22.05kHz signal at 44.1kHz, what shape will I get when the
> signal is rehydrated?
A violation of the Sampling Theorem, thus a failure to record as is
required for a meaningful and unambiguous result. :-)
The key point to bear in mind is that the sampling theorem actually
requires the sampling rate to be slightly *greater* than twice the
bandwidth for samples taken at regular intervals. Thus your question is
about a 'pathological' case which sits on the edge of violating the
sampling theorem. This is why you get the loss of information and ambiguity
problems you go on to describe in the rest of your posting. Alas, many
books and articles on this topic fail to make this critical point clear.
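To see that edge-case ambiguity numerically, here is a minimal sketch in
Python (assuming NumPy is available; the figures and names are just for
illustration):

  import numpy as np

  fs = 44100.0                # sampling rate (Hz)
  f = fs / 2                  # a 22.05kHz tone, sitting exactly on the edge
  n = np.arange(8)            # a few sample instants

  for phase in (0.0, np.pi / 4, np.pi / 2):
      x = np.sin(2 * np.pi * f * n / fs + phase)
      # at f = fs/2 the samples collapse to (-1)^n * sin(phase), so the
      # recorded 'amplitude' depends entirely on where sampling started
      print(f"phase = {phase:4.2f} rad -> samples = {np.round(x, 3)}")

With phase 0 every sample lands on a zero crossing and the series is all
zeros; with phase pi/2 you get the full +/-1 swing.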
One way to look at this is as follows...
For a finite duration the required number of samples can normally be taken to
be 2N+1, uniformly spread across the sampled waveform duration, where the
'2N' value comes from the ratio of "2" that is usually quoted in casual
discussions. Note the "+1", which means you then satisfy the actual Sampling
Theorem requirements, not just "2N". However, with long durations and wide
bandwidth "2N" ends up so much bigger than "1" that the values 2N and 2N+1
become 'identical' for most purposes - but the difference remains as a way
to catch people out by confusing them with pathological examples and
paradoxes. :-)
To try and make that clearer in practical terms, take two (mono) examples.
One is a 1 second recording at CD-A rate, the other a 10 second recording
at CD-A rate.
To record 1 second you actually need to sample 44,101 instants. The first is
at time = 0 (i.e. the start of the recorded duration). The 44,101st sample
will be at time = 1 sec.
For a 10 second recording the same argument means you require 441,001
samples. Ten times as long, but not (quite) ten times as many samples. :-)
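As a quick arithmetic check of those counts (a Python sketch, nothing
library-specific):

  fs = 44100                       # CD-A sample rate (Hz)

  for duration in (1, 10):
      samples = duration * fs + 1  # the '2N+1' count: instants at both ends
      print(f"{duration:2d} s -> {samples:,} sample instants")

This prints 44,101 for 1 second and 441,001 for 10 seconds, matching the
figures above.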
The longer the duration, the closer 2N and 2N+1 become. However even when
you ensure this, you still get a bandwidth 'just less than' 22.05kHz by the
ratio 2N/(2N+1) for 'arbitrary' waveforms. Longer duration recordings can
actually get closer. This also follows from the meaning of 'frequency' in a
measurement context where the higher the precision we require, the longer
the duration of our observation has to be (for an 'arbitrary' waveform).
Also from the requirement that the recording be 'unambiguous'.
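A worked instance of that ratio (again a Python sketch, simply evaluating
the 2N/(2N+1) expression above for a few durations):

  fs = 44100.0

  for duration in (1.0, 10.0, 60.0):
      N = int(duration * fs / 2)           # 2N+1 samples span the duration
      bw = (2 * N / (2 * N + 1)) * fs / 2  # 'just less than' 22.05kHz
      print(f"{duration:4.0f} s -> {bw:.4f} Hz "
            f"(short of 22050 by {22050 - bw:.4f} Hz)")

For 1 second the shortfall is about 0.5Hz; for 10 seconds about 0.05Hz -
the longer the recording, the closer you get.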
In practice, systems like CD-A use a fixed rate, but the *number* of
samples you get for a given duration is not in the 2/1 ratio you assumed.
There is one extra value. The precise practical/meaningful bandwidth also
depends upon the length of the recording, but the closeness of the approach
to 22.05kHz is so good for sensible durations that for most purposes
engineers don't normally have to worry about this. It only crops up in
places like the 'paradox' you describe below...
The information you record is about the whole pattern during the recorded
duration.
The above is a bit arcane, but I hope it makes sense... :-)
[If you are not careful, I'll also start explaining why "2N" values for
FFTs can still be OK despite the above... ;-) ]
> If the first sample occurs at the positive-going crossing point, then
> the next sample will be at the negative-going crossing point, and so on,
> resulting in a rehydrated wave of zero amplitude.
> If the first sample occurs at the positive peak, then the next will be
> at its complementary trough, and the rehydrated wave might mirror the
> original signal.
> If the first sample occurs somewhere between a crossing point and
> peak/trough, then the rehydrated wave will have the same frequency as
> the original, but the amplitude will be reduced.
> I might be wrong, but I don't see how a rate of two samples per
> wavelength can guarantee to rehydrate the amplitude of the original
> signal without assuming that the samples have been taken at the peaks
> and troughs of the original. Granted, a few more samples per wavelength
> might provide sufficient basis to infer the actual shape of the original
> signal, but I still feel that it could be a bit iffy at certain
> frequencies.
> Isn't there also an assumption being made as to what shape the original
> wave had (sine, triangular, etc)?
Nope. The sampling theorem is based upon requiring the sampled waveform to
have no details whatsoever outwith the bandwidth for which the Sampling
Theorem would be satisfied by the chosen sampling rate. Thus if the
'harmonics' that distinguish these waveshapes fall within the bandwidth
they should be recorded by the series of sample values. If any frequencies
are outside this bandwidth, the waveform is not being correctly sampled.
Thus it is a strict requirement that the waveform to be sampled must only
contain power within the bandwidth specified. If not, the recording becomes
ambiguous - i.e. distorted and cannot be correctly reconstructed without
other information.
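To illustrate that ambiguity concretely, here is a small Python/NumPy
sketch (the 30kHz figure is just an arbitrary example of out-of-band
power):

  import numpy as np

  fs = 44100.0
  f_true = 30000.0                 # above fs/2, so it violates the band limit
  n = np.arange(10)

  x = np.cos(2 * np.pi * f_true * n / fs)
  f_alias = fs - f_true            # 14,100Hz: the in-band tone it mimics
  x_alias = np.cos(2 * np.pi * f_alias * n / fs)

  print(np.allclose(x, x_alias))   # True: the two sample series are identical

Once sampled, nothing in the data distinguishes the 30kHz tone from a
14.1kHz one - that is the 'other information' you would need.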
Hope the above helps. Apologies if my explanation is clear as mud! :-)
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html