
April 8th 10, 06:40 PM
posted to uk.rec.audio
Media player to DAC
On 08/04/2010 16:29, Jim Lesurf wrote:
In , Rob
wrote:
On 08/04/2010 12:45, Arny Krueger wrote:
Rule number one is that when you do comparisons like this, you take
the high sample rate file and downsample it yourself, which is easy to
do with free software that can be downloaded from the web.
Why's that - are Naim not to be trusted?
Erm... I've not checked, but I presume they are making the files available
for people to listen to rather than use as examples for assessing the
effect of *only* changing the sample rate and/or bit-depth.
Not sure what "trust" has to do with that *unless* Naim have stated that
the *only change* was to downsample one version. Even then I'd personally
want to know the details of the process to be able to understand what
effect that may or may not have.
See below, and I was just wondering if there's any convention here with
the offer of two sample rates, where any difference is contestable
(unlike mp3s, where most people acknowledge a difference).
However I would "trust" them to do their best to make good sounding
versions if their purpose is to produce material people want to listen to.
Without other evidence, though, I don't know what they'd think the best way
to do that. So don't know what they would do to make versions at different
sample rates, etc.
When doing such things on a scientific/academic basis you want to know all
the details as they may affect the results for reasons that differ from the
assumptions that otherwise might be made.
The context in such terms is that I think others have already found that
some dual format commercial releases show things like differences in level
compression, made because those producing the versions assumed something
different was 'better' for the different (assumed) target audiences for the
two versions.
There are also various choices that could be made when using one version to
create the other, that then vary the output. e.g. I understand that at one
time Tony Faulkner preferred a simplistic form of downsampling that doesn't
actually meet the sampling theorem. He preferred the results, presumably
because he thought it made a 'change' that he liked. Or because it
minimised in-band filtering at the expense of aliasing.
That's really why I ask - I think. If there's more than one way to
downsample properly, I'm stuffed.
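Arny's "rule number one" - take the high sample rate file and downsample it yourself - can be done with any of several free tools (SoX, Audacity, or a few lines of Python). A minimal sketch using scipy, with a synthetic tone standing in for Naim's actual files:

```python
# Minimal sketch of "downsample it yourself": 96 kHz -> 48 kHz with a
# polyphase anti-alias filter. A synthetic 1 kHz tone stands in for a file.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 96_000, 48_000
t = np.arange(fs_in) / fs_in               # one second of signal
x = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz test tone

# resample_poly filters (anti-alias) and decimates in one pass
y = resample_poly(x, 1, 2)                 # 96 kHz / 2 = 48 kHz

print(len(x), len(y))                      # 96000 48000
```

This way both versions come from the same known process, so any audible difference can only be down to the rate change itself.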

April 9th 10, 07:58 AM
posted to uk.rec.audio
Media player to DAC
In article , Rob
wrote:
That's really why I ask - I think. If there's more than one way to
downsample properly, I'm stuffed.
In principle 'downsampling' should be done 'properly' and will then lead to
a uniquely defined result - even if done in various algorithmic ways.
But in practice any downsampling or resampling can produce its own
(needless in theory) alterations that vary with the method used.
And in practice the vendors/creators may well add on other 'alterations'
they regard as an 'improvement' for each specific version. They may well
not admit this, or say how they did it. All part of the magic of
'mastering', etc. From the same people who brought us CDs that are clipped
and level-compressed to death because they "know" people "like" that.
But I have no idea what Naim have done. Might be able to tell once an
analysis has been carried out. I'd expect them to have avoided the insane
clipping, etc. But for all I know, they do other things because they judge
it gives 'better' results.
Slainte,
Jim
--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
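Jim's point that different resampling methods produce their own (in theory needless) alterations is easy to demonstrate: two perfectly reasonable resamplers applied to the same input do not return the same samples. A sketch, assuming scipy's FFT-based and polyphase resamplers as the two methods:

```python
# Two legitimate 2:1 downsampling methods applied to identical input:
# the outputs differ, even though both are "proper" in intent.
import numpy as np
from scipy.signal import resample, resample_poly

rng = np.random.default_rng(0)
x = rng.standard_normal(96_000)       # one second of noise at 96 kHz

a = resample(x, 48_000)               # FFT (brick-wall, periodic) method
b = resample_poly(x, 1, 2)            # polyphase FIR method

print(np.max(np.abs(a - b)) > 0)      # True: same input, different output
```

The differences live mostly in the transition band and at the edges, which is exactly where the method-dependent tradeoffs sit.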

April 9th 10, 12:29 PM
posted to uk.rec.audio
Media player to DAC
Jim Lesurf wrote:
In article , Rob
wrote:
That's really why I ask - I think. If there's more than one way to
downsample properly, I'm stuffed.
In principle 'downsampling' should be done 'properly' and will then lead to
a uniquely defined result - even if done in various algorithmic ways.
That's not so.
Downsampling always involves a reduction in Nyquist frequency. It's
necessary therefore to filter the input to make sure frequencies above
this are sufficiently reduced. That filter can never be perfect, and
there will be various tradeoffs, involving extra loss of top-end,
in-band ripple and 'wrap-around' garbage from insufficient rejection of
higher-than-Nyquist signal. It's all down to what the person doing it
thought would be best (by some arbitrary criterion), and there is no
unique or 'right' answer.
--
Mike Scott (unet2 at [deletethis] scottsonline.org.uk)
Harlow Essex England
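The 'wrap-around' garbage Mike describes is easy to exhibit. In this sketch (all values assumed for illustration), a 30 kHz tone sampled at 96 kHz is halved in rate two ways: with no pre-filter it aliases back in band at nearly full level; with an FIR anti-alias filter it is strongly attenuated.

```python
# A 30 kHz tone is above the 24 kHz Nyquist of a 48 kHz target rate.
# Naive decimation wraps it back in band; filtered decimation removes it.
import numpy as np
from scipy.signal import decimate

fs = 96_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 30_000 * t)

naive = x[::2]                          # no filter: aliases to 18 kHz
proper = decimate(x, 2, ftype='fir')    # FIR anti-alias filter first

rms = lambda s: np.sqrt(np.mean(s ** 2))
print(rms(naive), rms(proper))          # ~0.707 vs near zero
```

The residue left by `proper` is the practical tradeoff: a longer filter buys more rejection at the cost of computation and top-end rolloff near the band edge.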

April 9th 10, 01:18 PM
posted to uk.rec.audio
Media player to DAC
"Mike Scott"
wrote in message
Jim Lesurf wrote:
In article , Rob
wrote:
That's really why I ask - I think. If there's more than
one way to downsample properly, I'm stuffed.
In principle 'downsampling' should be done 'properly'
and will then lead to a uniquely defined result - even
if done in various algorithmic ways.
That's not so.
If you are defining "uniquely defined" as being some precise bit pattern,
then I am forced to agree.
Downsampling always involves a reduction in Nyquist
frequency. It's necessary therefore to filter the input
to make sure frequencies above this are sufficiently
reduced. That filter can never be perfect, and there will
be various tradeoffs, involving extra loss of top-end,
in-band ripple and 'wrap-around' garbage from
insufficient rejection of higher-than-Nyquist signal.
That would be one of those things that is true theoretically, but from an
audibility standpoint is not.
The big difference is how sophisticated we have become in terms of designing
and implementing digital filters.
It's all down to what the person doing it thought would
be best (by some arbitrary criterion), and there is no
unique or 'right' answer.
If computational resources are highly extensible, it is possible to produce
digital filters with very nearly ideal phase and amplitude characteristics.
Perceptual studies have also improved - we now know that the ideal phase
characteristic for the required brick wall filter is neither linear phase
nor minimum phase. However, we base that knowledge on experiments done at
Nyquist frequencies well below 20 kHz, because sonically innocuous
downsampling to 22 kHz has been routinely available at a reasonable cost
for nearly a decade.
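The claim about very nearly ideal filters is checkable: a long linear-phase FIR gets arbitrarily close to a brick wall if you spend the taps. A sketch with assumed band edges (flat to 20 kHz, deep rejection above 24 kHz at fs = 96 kHz) and an assumed 4001-tap Kaiser design:

```python
# A long linear-phase FIR approaching a brick wall: flat to 20 kHz,
# deeply attenuated above 24 kHz. Tap count and band edges are assumptions.
import numpy as np
from scipy.signal import firwin, freqz

fs = 96_000
taps = firwin(4001, cutoff=21_000, fs=fs, window=('kaiser', 12.0))

w, h = freqz(taps, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)

passband = mag_db[w < 20_000]           # worst-case passband droop/ripple
stopband = mag_db[w > 24_000]           # worst-case stopband leakage
print(round(passband.min(), 3), round(stopband.max(), 1))
```

The passband deviation comes out at a tiny fraction of a dB and the stopband leakage well below -90 dB - which is Arny's point: the limit is how much computation you are willing to spend, not the theory.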

April 9th 10, 02:23 PM
posted to uk.rec.audio
Media player to DAC
In article , Arny
Krueger
wrote:
"Mike Scott" wrote in
message
Jim Lesurf wrote:
In article , Rob
wrote:
That's really why I ask - I think. If there's more than one way to
downsample properly, I'm stuffed.
In principle 'downsampling' should be done 'properly' and will then
lead to a uniquely defined result - even if done in various
algorithmic ways.
That's not so.
If you are defining "uniquely defined" as being some precise bit
pattern, then I am forced to agree.
The problem arises when the poster snips away a following comment and
ignores the distinction I made quite clearly. But which apparently escaped
his grasp. :-)
If computational resources are highly extensible, it is possible to
produce digital filters with very nearly ideal phase and amplitude
characteristics.
Indeed. That is a matter of how much care, effort, and computation time,
the people involved are willing to apply to the process.
Slainte,
Jim

April 9th 10, 02:19 PM
posted to uk.rec.audio
Media player to DAC
In article , Mike Scott
wrote:
Jim Lesurf wrote:
In article , Rob
wrote:
That's really why I ask - I think. If there's more than one way to
downsample properly, I'm stuffed.
In principle 'downsampling' should be done 'properly' and will then
lead to a uniquely defined result - even if done in various
algorithmic ways.
That's not so.
Downsampling always involves a reduction in Nyquist frequency. It's
necessary therefore to filter the input to make sure frequencies above
this are sufficiently reduced.
Correct.
That filter can never be perfect, and there will be various tradeoffs,
involving extra loss of top-end, in-band ripple and 'wrap-around'
garbage from insufficient rejection of higher-than-Nyquist signal.
Also correct in practice. But you missed my "in principle" in what I wrote
above. (Which you have snipped away.) And presumably then failed to
understand why I then went on to discuss how "in practice" will be
different - for reasons like the one you mention.
To remind you, what I wrote that you quoted above was immediately followed
by my saying:
But in practice any downsampling or resampling can produce its own
(needless in theory) alterations that vary with the method used.
Perhaps you failed to read that before leaping in. Pity, as understanding
it would have meant you'd have had no reason to write what you did. :-)
It's all down to what the person doing it thought would be best (by some
arbitrary criterion), and there is no unique or 'right' answer.
It is formally incorrect to say there is no "unique or right answer". The
formally correct and uniquely correct "answer" is to have all the in band
components preserved whilst losing all the out of band ones. This follows
from the sampling theorem, etc. That then represents the uniquely "correct"
answer in terms of information theory.
However, as my previous posting on this did point out (but you snipped and
ignored), in practice you tend to have to accept some level of
imperfection. Albeit very small if the resampling is well done.
Slainte,
Jim
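Jim's formally correct criterion - keep all in-band components, lose all out-of-band ones - can be checked numerically. A sketch with assumed tone frequencies chosen to fit whole cycles into one second, so the FFT-based resampler is essentially exact:

```python
# Verify the information-theoretic criterion: an in-band tone survives a
# 96 kHz -> 48 kHz resample unchanged; an out-of-band tone vanishes.
import numpy as np
from scipy.signal import resample

fs_in, fs_out = 96_000, 48_000
t = np.arange(fs_in) / fs_in
in_band  = np.sin(2 * np.pi * 10_000 * t)   # below the new 24 kHz Nyquist
out_band = np.sin(2 * np.pi * 30_000 * t)   # above it: must disappear

y = resample(in_band + out_band, fs_out)

t2 = np.arange(fs_out) / fs_out
residual = y - np.sin(2 * np.pi * 10_000 * t2)
print(np.max(np.abs(residual)))             # essentially zero
```

On real, non-periodic material no resampler achieves this exactly - which is where the "in practice" imperfections discussed above come back in.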

April 9th 10, 11:06 AM
posted to uk.rec.audio
Media player to DAC
"Rob" wrote in message
news
There are also various choices that could be made when
using one version to create the other, that then vary
the output. e.g. I understand that at one time Tony
Faulkner preferred a simplistic form of downsampling
that doesn't actually meet the sampling theorem. He
preferred the results, presumably because he thought it
made a 'change' that he liked. Or because it minimised
in-band filtering at the expense of aliasing.
That's really why I ask - I think. If there's more than
one way to downsample properly, I'm stuffed.
Not only are there many different downsamplers, with vastly different levels
of accuracy, but there is a time-honored process of simply starting out with
differently mastered recordings.