In article , Johnny B Good wrote:
> You just need to ensure that the inevitable imperfections in the
> technology can't produce detectable errors or undesirable behaviours. In
> short, absolute perfection is not a requirement. You just need to make
> sure that it's good enough by a wide enough margin, in this case, by
> several orders of magnitude as it happens.
Alas, "you just need", whilst certainly true in principle, may not lead to
the makers invariably doing as required. Afraid it's a bungle out there! :-/
>> Given that you usually have no idea what filter was used, and it
>> changes from one recording to another, this is a poser for making a
>> 'perfect' DAC. But, again, you can help shove away from audibility
>> such issues by keeping with high rates until you get to the final DAC.
> TBH, I don't think that matters except with perhaps earlier recordings
> that may have used brickwall filtering into a non-oversampling 16 bit
> ADC.
You're still assuming that the DAC (and any prior downconversion) *is*
essentially perfect. I'm afraid that in reality this isn't invariably true.
> I'm pretty certain that even modern prosumer recorders use oversampling
> and on-the-fly reduction to 16/44.1K or 16/48K just to neatly sidestep
> the issue of analogue filtering effects.
Many ADC/DAC designs use oversampling (sometimes low-bit high
oversample). These help avoid some classes of problem, but at the expense
of exposing us to other, more complicated kinds of flaws.
A fundamental problem here is that such systems tend to end up being
nonlinear 3rd (or higher) order feedback/folding systems. The earliest
practical consequence was that people started hearing 'tones' and 'buzzes'
in the background, or some ADCs/DACs 'locked up'. (Early SACD modulators
and demodulators did this, so Philips/Sony had to keep changing the designs
trying to find ones that didn't - or at least were less likely to do so.)
This is because above 2nd order, such systems can become finite-state
'semi-chaotic' processors. In effect, they can become almost impossible to
check for such problems without a brute-force check on all possible
state/input situations.
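
For anyone curious, the flavour of the problem is easy to reproduce. Below
is a toy first-order delta-sigma modulator in Python - nothing like any
real commercial design, and the rate and DC value are purely illustrative.
Give it a small DC input and the spectrum shows discrete 'idle tones'
rather than a smooth noise floor. Higher-order loops suppress these tones,
but they are exactly the loops that can 'lock up' or go semi-chaotic.

import numpy as np

def first_order_dsm(x):
    # Classic first-order loop: integrate the error between the input
    # and the fed-back 1-bit output, then quantise to +/-1.
    y = np.empty_like(x)
    acc = 0.0
    for n, xn in enumerate(x):
        acc += xn - (y[n - 1] if n else 0.0)
        y[n] = 1.0 if acc >= 0.0 else -1.0
    return y

fs = 2_822_400                             # 64 x 44.1k, illustrative
N = 1 << 16
bits = first_order_dsm(np.full(N, 0.01))   # small DC input

# A DC input makes the output bit pattern periodic, so the spectrum
# contains discrete 'idle tones' instead of a smooth noise floor.
spec = np.abs(np.fft.rfft(bits * np.hanning(N)))
freqs = np.fft.rfftfreq(N, 1 / fs)
print(f"strongest idle tone near {freqs[np.argmax(spec[1:]) + 1]:.0f} Hz")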
> I'm afraid you'll have to offer an explanation (or a link to an
> explanatory article) to convince me as to how a low-pass filter with a
> turnover frequency in the region of 30 to 50 kHz or higher can impact the
> replay of a 20 Hz to 20 kHz band of signals in a CDDA replay system.
Erm. You seem to have your telescope the wrong way around here. My point is
that using higher rates helps avoid the risks. I can't recall claiming that
all low rate DACs *will* sound poor. Indeed, many seem fine to me. What I
have said is that using high rates will help shove any problems further
away from being audible.
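
To put a rough number on 'shoving problems away': the filtering a DAC must
do near the band edge gets dramatically easier as the rate rises. Here's a
back-of-envelope Kaiser window estimate (Python/scipy; the 100 dB stop-band
target is just an illustrative figure) of the FIR length needed to pass
20 kHz and be down by Nyquist at each rate:

from scipy.signal import kaiserord

atten_db = 100.0   # illustrative stop-band target

for fs in (44_100, 96_000):
    # Transition width from 20 kHz up to Nyquist, normalised to Nyquist.
    width = (fs / 2 - 20_000) / (fs / 2)
    ntaps, beta = kaiserord(atten_db, width)
    print(f"fs = {fs} Hz: roughly {ntaps} taps (Kaiser beta {beta:.1f})")

The near-brickwall 44.1k case needs several times the filter, and sharp
filters are exactly where design compromises creep in.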
If you want to know some of the ways some DACs can foul up the results when
fed with low rates (and/or 16bit) you only have to read some of your own
comments about how people have discovered the problems. Then note that in
reality the relevant solutions haven't *always* been implemented in every
DAC made.
The reality is that any real DAC (or ADC or process) has to be a
compromise. I've lost count of how many measurements I've seen that show
things like anharmonic distortion, aliasing, etc, for real world designs.
What is less clear is when this may or may not matter in terms of causing a
significant audible degradation. But I'm just saying that you can help dodge
any such uncertainty by playing what was recorded as 96k *as* 96k, thus
avoiding any possible damage due to downconversion or the DAC.
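
For anyone who wants to see the kind of 'damage' I mean, here's a quick
Python sketch (the 30 kHz tone is just an illustrative stand-in for
ultrasonic content). Decimating 96k material without filtering folds
30 kHz down to 18 kHz, squarely in the audible band, whereas a proper
polyphase resample with its anti-alias filter removes it. Real resamplers
and DAC decimators sit somewhere between these extremes, which is rather
the point.

import numpy as np
from scipy.signal import resample_poly

fs_hi = 96_000
t = np.arange(fs_hi) / fs_hi           # one second at 96k
x = np.sin(2 * np.pi * 30_000 * t)     # ultrasonic content at 30 kHz

# Careless 2:1 decimation (keep every other sample, no filtering):
# 30 kHz aliases down to 48k - 30k = 18 kHz.
naive = x[::2]

# Proper rational resampling with a built-in anti-alias filter:
clean = resample_poly(x, 147, 320)     # 96k -> 44.1k (ratio 147/320)

for name, sig, fs in (("naive 48k", naive, 48_000),
                      ("filtered 44.1k", clean, 44_100)):
    spec = np.abs(np.fft.rfft(sig))
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    print(f"{name}: strongest tone at {f[np.argmax(spec)]:.0f} Hz, "
          f"level {spec.max():.1f}")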
Given that modern DACs play 96k/24 (or higher) quite happily I can't really
see much reason *not* to do this. And it would seem annoying to me for
someone to downsample many of their recordings, then later realise it did
have an effect, but that they no longer have the higher-rate sources.
Jim
--
Please use the address on the audiomisc page if you wish to email me.
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio
http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc
http://www.audiomisc.co.uk/index.html