March 9th 06, 08:52 PM, posted to uk.rec.audio
Paul B
Subject: Cables - the definitive answer

Thus spake Jim Lesurf:
snipped
If subjects can't hear fairly large differences in a calibration
cycle, I can envisage 2 explanations. Firstly, some/many/most
subjects are fairly insensitive to variations & by definition,
would be wasting money by buying expensive audio equipment for
sonic reasons alone.

That may be so. However, if so, we would then have to be cautious
about trying to draw specific/individual conclusions from the above
as it is a generalisation. So some people *might* be able to hear
*some* differences when others cannot.


A maddeningly large sample may be needed.


That would depend on two factors, currently undetermined.

1) How small the fraction of the general population may be that can
actually detect a given, small, change. The smaller the fraction,
the more people in general you would have to test to get some
reliable idea of the number of people involved.

2) The extent to which such people are 'self identifying'. If it were
the case that those who keep insisting they can hear differences
which others cannot *can* do this, then they have identified
themselves from the general population. Thus if they then
demonstrated in a suitable test that they *can* do what they assert,
then we can know they exist - although that in itself won't tell us
what fraction of the general population they represent.

The snag, though, is that the general result of tests indicates that
people can't hear the small differences that they claim. Hence at
present the evidence is that the fraction of the population involved
is between 'small' and 'nil' according to circumstances, i.e. a value
we can't reliably distinguish from 'nil'.
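[The 'maddeningly large sample' point can be made concrete. If a fraction
p of the population could genuinely hear a given difference, elementary
probability tells you how many randomly chosen subjects you would need
before you could expect, with high confidence, even one such person in
your test group. A minimal sketch in Python - the 1-in-10,000 figure is
purely illustrative, not drawn from any of the tests discussed:]

```python
import math

def sample_size_for_detection(p, confidence=0.95):
    """Subjects needed so that, with the given confidence, a random
    sample contains at least one genuine discriminator, assuming
    discriminators make up fraction p of the population.

    Derivation: P(none in n subjects) = (1 - p)**n, so we need the
    smallest n with (1 - p)**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# If 1 in 10,000 people could really hear a given small difference,
# you would need roughly thirty thousand random subjects:
print(sample_size_for_detection(1e-4))  # 29956
```

[Which is why self-identified 'golden ears' are so useful to test: if they
really can do it, they have pre-selected themselves out of that haystack.]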


I subscribe to the idea that most healthy people have reasonably similar
hearing acuity & I wouldn't get too fixated on those hovering above the
'nil' threshold. I can understand that people can be trained to hear better
than average, but 1 or 2 in a million people aren't worth designing equipment
for - & what format, apart from CDs, DVD-As, SACDs or records, would they
listen to anyway?

I re-emphasise the need to check the calibration cycle in any DB test as a
fair means of determining its effectiveness. /If/ DB were proved to be
sufficiently effective, it would save a lot of head-scratching over devising
different tests.

The advantage of some of the ABX forms of test is that the
comparisons can be done on all sorts of time scales - under the
control of the test subject. So they can switch quickly if worried
about 'memory' or drifts in their physiology, etc. For some kinds
of difference this seems IIRC to produce enhanced sensitivity. But
for others it shows no sign of the subjects being able to hear any
difference, on any timescales people have employed.
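[For context on what 'hearing a difference' means statistically in an ABX
test: a subject with no real ability gets each trial right with probability
1/2, so the chance of any given score arising by pure guessing follows the
binomial distribution. A short illustrative sketch - the 16-trial session
is an assumption for the example, not a figure from the tests Jim cites:]

```python
from math import comb

def guessing_probability(correct, trials):
    """P(at least `correct` right out of `trials` ABX trials by pure
    coin-flip guessing), i.e. the one-sided binomial tail at p = 1/2."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 right out of 16 trials is unlikely to be luck (p < 0.05);
# 11 right out of 16 is still consistent with guessing:
print(round(guessing_probability(12, 16), 4))  # 0.0384
print(round(guessing_probability(11, 16), 4))  # 0.1051
```

[This is also why the subject-controlled switching matters: the statistics
only count trials, so letting listeners pick their own timescale removes
one excuse without weakening the test.]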


Months? I ask because I've either replaced or upgraded equipment,
listened & made a mental note that I could hear no differences then
forgotten about it. Months later, I've played a particular piece of
music to be struck by how different it sounds.


Indeed, that happens to me over timescales from hours to months,
using the same equipment and source material.

However some of the tests I recently read about in the JAES, Audio
Amateur, etc, involved loaning some people a 'black box' for some
months, and inviting them to decide on an 'AB' or 'ABX' basis what it
contained. This was in one case apparently done because those being
tested insisted that a 'quick' test was not 'sensitive', but
prolonged listening would enable them to be more discriminating. The
reported results showed no foundation for their belief.


JAES is members only. I would tend to conclude that /if/ these tests were
carried out correctly & are statistically significant, this form of DB
testing was pretty valid - but what was being tested? If interconnect cables,
I would expect as much!

But only if the tests are valid & don't end up perpetuating a
fallacy. If it meant going back to the drawing board, so be it.

The problem with *if* here is that it is a speculation. That has no
real use in the scientific method *unless* you can then propose a
test which would distinguish your hypothesis from the competing
ones...

Thus a given test *might* not be 'valid'. But to decide this would
require a suitable test, ideally also a proposed 'mechanism' for the
cause of the lack of 'validity' which the new test would probe.


You make it seem that I advocate rolling dice! Yes, of course it's
more satisfactory to put forward a displacing theory rather than merely
suggesting the existing one is flawed, but where would we be if
someone were to suggest that lead in cosmetics was dangerous & others
said that they would continue using it until the doubter came up
with a substitute? As for speculation, Jim, much good science has
come from it.


Without that, we have to work on the basis of using the hypotheses
that are consistent with the evidence we have, and trying to avoid
adding mechanisms which the evidence does not require, or ideas we
cannot test.

Many things *might* be the case. But that does not tell us they
*are* the case. For that we require relevant evidence. Alas, "the
evidence does not agree with my beliefs" is not actually
evidence... :-)


Until someone comes up with a watertight explanation why DB is
infallible or near as dammit so, I'll reserve the right to be
sceptical in the same manner that I've been sceptical of my own
hearing. To sum up,


No test method or experiment is "infallible". I am afraid that
science does not work like that. What it does is gather evidence so
we can use that to assess how reliable or useful a given idea may (or
may not) be. If you wish for "infallibility" then I'm afraid you will
have to ask a theologian, not a scientist or an engineer. :-)


Hence the qualification of 'near as dammit' to indicate a methodology that
ain't too controversial.

I'm not suggesting that DB testing is completely pointless but IMO,
can't be relied upon as the sole means of testing, especially when
some use it as a club to bash people with the idea that most
equipment sounds essentially identical. I feel more comfortable with
folks being cloth-eared than folks having so-called golden ears!


I agree - but only if the speculation is testable and some evidence to
support it can be gathered and assessed. So if we say a given product
is 'dangerous' we might then regard it with caution, but then expect
some evidence to back up the assertion. If no evidence can be
provided, we can decide to regard the assertion as having no reliable
substance.

We may change our minds at a later point if evidence *does* appear.
But the change of understanding should be based on evidence.

Otherwise we would have to work on the basis of never doing anything
at all because it "might be dangerous".


I'm having a longish weekend in Edinburgh, so I'll be giving this topic a
well earned rest. I'm hoping to make it to Leith where a friend has a pair
of Art Stilettos I'm curious to hear.