From: Johnny B Good
Newsgroups: uk.rec.audio
Subject: More audio tomfoolery
Date: July 17th 15, 03:34 AM

On Thu, 16 Jul 2015 10:13:29 +0100, Jim Lesurf wrote:

In article , Johnny B Good wrote:

On Wed, 15 Jul 2015 20:56:38 +0200, John R Leddy wrote:


Jim Lesurf[_2_] wrote:
FWIW I've never felt that going as far as 192k/24 made much
sense for home replay. 96k/24 seems a convenient 'compromise' to me
given the use of decent replay equipment. But YMMV.

It is perhaps worth pointing out to people that if you convert to
flac you will usually find that the resulting 96k/24 file is *not*
twice as big as a 48k/24 flac from the same source.

In general there isn't a lot in the ultrasonic region, and the flac
compression can take advantage of this.

The main difference tends to be that there are more bits devoted to
'noise' in 24bit than 16bit. And flac will faithfully keep those
details.
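
A rough way to see why, sketched in Python (numpy assumed; a first
difference stands in for FLAC's real linear predictors, so this is an
illustration of the principle, not how the codec is implemented):

import numpy as np

rng = np.random.default_rng(1)
fs = 96000
t = np.arange(fs) / fs

tone  = 0.5 * np.sin(2 * np.pi * 1000 * t)   # band-limited 'music'
noise = 0.5 * rng.standard_normal(fs)        # broadband 'noise' bits

# FLAC codes the residual of a linear predictor; the first difference
# is the crudest such predictor. A small residual needs few bits.
for name, x in (("tone", tone), ("noise", noise)):
    resid = np.diff(x)
    print(name, "residual/signal rms:",
          round(float(np.std(resid) / np.std(x)), 3))

# The tone's residual is tiny (highly compressible); the noise's is
# larger than the signal itself, which is why extra bits of noise
# floor inflate a flac file more than extra bandwidth does.
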
I can't bring myself to allocate over a gigabyte of storage space to
a single-CD album. 24-bit 96kHz albums seem to average just under a
gigabyte, which suits me fine. This aspect, and the fact I was willing
to convert my 24-bit 192kHz files to 24-bit 96kHz, allowed me to
exchange my first 24-bit 192kHz network audio player for one with a
maximum of 24-bit 96kHz playback. Truth be told, until participating
in this thread, I would've quite happily converted my files to 16-bit
48kHz if I had to and not thought any more about it.

I'd much rather have a good quality production and master of a 24-bit
96kHz album than a 24-bit 192kHz album of poor quality. Shame
someone decided it was easier to sell numbers than improved quality.
I would've preferred the better quality no matter what numbers were
associated with the file. Maybe that's a giveaway when thinking about
the relevant skills within the industry. To fall back on the public's
lack of knowledge seems a bit defeatist and insecure to me. That
said, I guess we do tend to believe anything we're told and spend our
money accordingly.

Fortunately, I have such appalling taste in music none of this
probably matters a great deal anyway.


I've been following this discussion with growing dismay as phrases
such as "96k/24 seems a convenient 'compromise' to me" started to rear
their ugly heads.


A guy by the name of Monty Montgomery presented a couple of very
interesting videos that nicely relate to the whole business of digital
audio (and video). The links to those videos can be found on this page:


http://xiph.org/video/


Yes, I've seen them in the past and would recommend them with one
caveat; cf. below.



I read about halfway through to the key facts (I'll read the rest
later on), where he states unequivocally that 16-bit 44.1k CD audio far
exceeds the capabilities of even the most superhuman of hearing
abilities. IOW, once you're dealing with a finalised music performance
properly committed to CD, that's it as far as 'perfection' is
concerned.


The problem is that he does omit various factors that make reality
different from 'perfection'. This includes the DAC used, as that
produces the 'final' analogue version at the end of the digital chain.

The key point to keep in mind for real engineers is that as a general
rule *every* process or conversion in a chain can be expected to degrade
or alter the information.


The only way that a 24/96 "Hi Definition" version is going to sound any
better is if the final mixdown processing used to create the CD had
been comprehensively buggered up.


Afraid that isn't an absolute truth. The reality is more complex. Even a
technically perfect downconversion for the CD exposes the listener more
to any imperfections in their DAC. And alas, no practical engineered
system will be perfect.


He did make the point in his "Digital Show & Tell" video that the USB
eMagic unit he was using was already right on the edge of perfection
despite its ten-year vintage. You might never be able to create an
absolutely perfect engineering solution, but you can certainly get so
close that the resulting distortion products sit at ludicrously low
levels, in this case two or more orders of magnitude below audibility.

You just need to ensure that the inevitable imperfections in the
technology can't produce detectable errors or undesirable behaviours.
In short, absolute perfection is not a requirement; good enough by a
wide enough margin will do, and here that margin runs to several
orders of magnitude.
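
As a back-of-envelope check on that margin, here's the textbook
quantisation SNR figure for a full-scale sine in N-bit PCM,
SNR = 6.02*N + 1.76 dB, worked in Python (purely illustrative):

# Ideal quantisation SNR for a full-scale sine in N-bit PCM.
for bits in (16, 24):
    print(bits, "bits:", round(6.02 * bits + 1.76, 1), "dB")

# 16 bits: 98.1 dB; 24 bits: 146.2 dB. Both leave the quantisation
# floor far below anything audible at sane listening levels.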


The point is that each stage will tend to alter the results. A perfect
Audio CD is only a beermat or car scarer if you ignore the stage of
being able to play it. :-) So the aim, if you're concerned with
quality, is to keep the problems well clear of the audible result *at
every stage along the way*, i.e. including your DAC, etc.

Sadly, for most popular music and 'digital re-masterings' of analogue
studio recordings and professional multi-track recordings of live
performances, the 'buggering up' is the result of deliberate vandalism,
often in the name of 'winning the loudness wars'.


Certainly true. And one of the problems with downconversion is that it
tends to generate *higher* peak values in between the sampled instants.
So the simple act of downconversion can lead to a clipped result if the
source material was 'as loud as possible' without itself being clipped.
Again, what you get out here may depend on your DAC.
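
A minimal sketch of the inter-sample peak effect (Python with
numpy/scipy assumed; the 8x oversample stands in for the DAC's
reconstruction of the analogue waveform):

import numpy as np
from scipy.signal import resample

fs = 44100
t = np.arange(256) / fs
# A full-scale fs/4 tone sampled 45 degrees off-peak: every stored
# sample is +/-0.707, but the continuous waveform peaks higher.
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
x /= np.max(np.abs(x))        # loudness-maximised: samples hit 0 dBFS
y = resample(x, len(x) * 8)   # approximate the reconstructed waveform

print("sample peak:", round(float(np.max(np.abs(x))), 3))  # 1.0
print("true peak:  ", round(float(np.max(np.abs(y))), 3))  # ~1.414,
# i.e. about +3 dB over full scale, which a downstream stage may clip.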

There is also a more basic problem people don't seem fully aware of,
but one which does cause them to engage in activities like 'which
reconstruction filter do I like?'. 8-]

The optimum choice of reconstruction filter (and resampling filters)
depends on the filtering used in the ADC when the digital samples were
made from the incoming audio during recording. The meaning (information
payload) of the sampled data values is determined by the ADC filtering.
This is a fundamental Information Theory point about which many
engineers, etc, seem totally unaware. To reconstruct an analogue shape
you need to know what input filter was used. Otherwise the result will
be altered in ways you can't predict.
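
A small illustration of that ambiguity (Python, numpy/scipy assumed;
the two Kaiser windows are arbitrary stand-ins, not anything a real
ADC/DAC pair uses, and white noise exaggerates the effect relative to
real programme material): the same stored samples, interpolated
through two differently shaped low-pass filters, give measurably
different waveforms.

import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)        # stand-in for stored samples

# 'Reconstruct' (4x upsample) with two different filter shapes.
gentle = resample_poly(x, 4, 1, window=('kaiser', 3.0))
sharp  = resample_poly(x, 4, 1, window=('kaiser', 12.0))

# The in-between values depend on which filter you assumed:
diff = gentle - sharp
print("rms discrepancy re signal:",
      round(float(20 * np.log10(np.std(diff) / np.std(x))), 1), "dB")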


That's more or less 'a given', especially in the early days when
times-one sampling with brick-wall filtering may have been used (I
don't know whether Philips' novel use of 4x oversampling with 14-bit
DACs was inspired by the not-so-novel pre-existing use of oversampling
in the digital capture process).


Given that you usually have no idea what filter was used, and it changes
from one recording to another, this is a poser for making a 'perfect'
DAC. But, again, you can help push such issues away from audibility by
keeping to high rates until you get to the final DAC.


TBH, I don't think that matters except perhaps with earlier recordings
that may have used brick-wall filtering into a non-oversampling 16-bit
ADC.

I'm pretty certain that even modern prosumer recorders use oversampling
and on-the-fly reduction to 16/44.1k or 16/48k just to neatly sidestep
the issue of analogue filtering effects.

Once you're dealing with professional kit, where the lowest recording
standard might start at a humble 24/48k and run all the way up to
24/192k, I've no doubt the capture is done at the highest 24/192k rate
regardless and down-converted to the selected storage format, making
the effects of the input filter totally immaterial as far as playback
of a CDDA-based music file is concerned.
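
For what it's worth, a sketch of why digital decimation makes the
analogue input filter a non-issue (Python, numpy/scipy assumed;
resample_poly's built-in FIR plays the role of the recorder's
decimation filter):

import numpy as np
from scipy.signal import resample_poly

fs_hi = 192000
t = np.arange(fs_hi) / fs_hi
audio = np.sin(2 * np.pi * 1000 * t)    # in-band content
ultra = np.sin(2 * np.pi * 30000 * t)   # passes a gentle analogue
                                        # filter, but sits above the
                                        # 24 kHz Nyquist limit of 48k

down = resample_poly(audio + ultra, 1, 4)   # decimate 192k -> 48k

# If the 30 kHz tone survived decimation it would alias to 18 kHz;
# the print shows how far down the decimation filter has pushed it.
spec  = np.abs(np.fft.rfft(down))
freqs = np.fft.rfftfreq(len(down), 1 / 48000)
bin18 = np.argmin(np.abs(freqs - 18000))
print("level at 18 kHz re 1 kHz tone:",
      round(float(20 * np.log10(spec[bin18] / spec.max())), 1), "dB")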

I'm afraid you'll have to offer an explanation (or a link to an
explanatory article) to convince me as to how a low-pass filter with a
turnover frequency in the region of 30 to 50 kHz or higher can impact
the replay of a 20 Hz to 20 kHz band of signals in a CDDA replay
system.


--
Johnny B Good