In article , John Phillips wrote:
On 2006-10-29, Geoff wrote:
Jim Lesurf wrote:
In fairness, I should point out, though, that the first generation
Philips '14 bit' chipsets for CD players actually used x4
oversampling. Thus - in principle at least - they returned 16-bit
resolution.
Pray tell, how does oversampling increase resolution? The reason for
oversampling was/is to make reconstruction filters easier to implement
without the artifacts of a steep slope. It's been a while; have I
forgotten?
I have sometimes wondered about the Philips x4 upsampling DAC in early
CD players (I use "upsampling" here to distinguish from the use of
oversampling in the ADC case).
I'd prefer to call it 'oversampling' in both cases for various reasons. One
being that in some situations 'upsampling' may be a distinctly different
practice.
I assume (but have never looked for proof) that the conversion of a
single 16-bit sample xx..xxYY (YY are the two LSBs) would be
accomplished by replacing the single 16-bit sample by four 14-bit
samples as follows:
xx..xx00: xx..xx, xx..xx, xx..xx, xx..xx
xx..xx01: xx..xx, xx..xx, xx..xx, xx..xx+1
xx..xx10: xx..xx, xx..xx, xx..xx+1, xx..xx+1
xx..xx11: xx..xx, xx..xx+1, xx..xx+1, xx..xx+1
Or something similar. The DAC will effectively interpolate so the LSBs
are not lost. The noise floor will be right for 16 bits because of the
upsampling.
I wonder if the amplitudes of the preceding and succeeding samples should
be taken into account to determine the right order of the +1s in the
interpolation? Probably not as I suspect the spectrum differences will
fall above the original Nyquist limit.
The above is essentially the same explanation that I would have given,
but since John puts it quite neatly, I need not bother. :-) A more
detailed explanation is given in the special issue of Philips Tech Rev
that was released at the same time as CD audio was launched, and
describes CD audio and the initial chipsets.
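
As a quick sketch of John's table in Python (this is my reading of the
mapping above, not anything from the Philips chipset itself, and
'expand' is just a name I've picked):

    def expand(sample16):
        # Spread a 16-bit sample over four 14-bit samples, per the
        # table above: the last YY of the four words are bumped by 1.
        top14 = sample16 >> 2      # xx..xx, the top 14 bits
        yy = sample16 & 0b11       # YY, the two LSBs
        return [top14 + (1 if i >= 4 - yy else 0) for i in range(4)]

    # The four words always sum back to the original value, so the
    # two LSBs are not lost:
    assert all(sum(expand(s)) == s for s in range(1 << 16))

Each block of four sums to exactly the 16-bit value, which is why the
averaging described below recovers the full resolution.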
The samples are 'noise shaped'[1] by a process along the lines that the top 14
bits of each sample are DAC converted and fed out as an analog level, and
the 'unused' 2 LSBs are fed back and combined with the next sample value.
The simplest method is the one described above, but alternative feedback
shaping processes can be used.
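
For concreteness, here is a minimal sketch of that simplest method
(first-order error feedback) in Python; the function name and structure
are mine, not the actual Philips circuit:

    def shape(samples16):
        # First-order error feedback: send the top 14 bits to the
        # DAC, carry the 2 discarded LSBs into the next sample.
        err = 0
        out14 = []
        for s in samples16:
            acc = s + err           # combine sample with fed-back error
            out14.append(acc >> 2)  # top 14 bits go to the DAC
            err = acc & 0b11        # 2 LSBs held over for next sample
        return out14

Note that acc can exceed 16-bit full scale by up to 3 LSBs, so a real
implementation needs a little headroom or clipping; I've left that out
for clarity.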
The output filter then acts to take a 'running average': four 14-bit
values sum (or average) to give a 16-bit result within the passband of
the analogue filtering arrangement.
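
Continuing the sketch above: hold each 16-bit sample for four periods
(the x4 oversampling), noise-shape, then sum each group of four 14-bit
outputs as a crude stand-in for what the analogue filter's averaging
achieves in its passband ('demo' is again just my name for it):

    def demo(samples16):
        # Hold each input sample for four periods: x4 oversampling.
        held = [s for s in samples16 for _ in range(4)]
        out14 = shape(held)
        # Group sums of four 14-bit words recover the 16-bit samples.
        return [sum(out14[i:i + 4]) for i in range(0, len(out14), 4)]

    assert demo([12345, 678, 9]) == [12345, 678, 9]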
In principle, the behaviour is the same as when any 'low bit depth' DAC is
used (with oversampling and noise shaping) to get results at higher bit
depths.
Thus by using oversampling and noise shaping we can simultaneously ease
the burden on the analog reconstruction filter that follows DAC
conversion, and allow the use of a DAC with fewer than 16 bits. This is
also the basis of other methods like low-bit delta-sigma DACs,
'bitstream', and various other commercial techniques, which use the same
general approach both to shift the reconstruction images to higher
frequencies (thus easing the analog filter requirements) and to obtain
high resolution.
Hence the original Philips 14-bit x4 oversampling system would be able,
in principle, to deliver full 16-bit resolution *if* the chips and the
associated electronics were made with suitable care. As usual, the
practical limits end up being determined by the care put into
engineering the actual implementation. :-)
Slainte,
Jim
[1] I regret the term 'noise shaped' in this context since we are talking
about a deterministic process, but it became the standard term, so we
seem to be stuck with it!
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html