In article , UnsteadyKen wrote:
Rob said...
I'd like to think I'd share a great deal of my new wealth, but that
aside, what would I be supposed to do with it all?
Fund a research project at a university to conduct the definitive
double blind cable sound test and settle the matter once and for all.
AFAICR there have been a number of 'double blind' (or, more precisely,
'carefully conducted and assessed') comparisons over the years. And these
have generally satisfied professional engineers such as those in the AES
that - with some specific qualifiers[1] - no differences were audible for
all kinds of amps, cables, etc.
FWIW I've also been involved in similar tests using students and colleagues
that arrived at much the same conclusions when judged by the actual
results. But I never bothered to try to 'publish' them because they
would just be simpler, "me too" duplicates of ones already done. Academic
and Professional Journals are rarely very interested in publishing papers
that simply say "Yup, we tried the same test as A and B and C and... and
came out with the same conclusions". Too busy publishing more novel and
interesting results.
Regardless of that, some people simply refuse to accept the results of such
tests. Instead they tend to argue: "Because the tests don't give a result
which agrees with me, it follows that the tests must be 'flawed'." This for
various reasons. The most obvious being that when they report cases where
they say they "can hear a difference", there is often no way to tell *why*,
or whether it is for the "reason" they assert, since their own comparisons
tend to use methods where a number of 'causes' other than the one they
espouse might have produced the perceived 'difference'.
So, no, such tests *don't* "settle" such matters "once and for all".
Except for many professional engineers and academics who have an
understanding of how such tests need to be done to ensure meaningful
results are likely to be produced.
Oh, and longer-term readers of this group may recall that a large cash
prize was on offer for many years to be given to anyone who managed to show
they *could* reliably hear a 'difference'. *Without* any need to pay up
themselves if they failed. So the only 'risk' to the person was to spend
some time and perhaps be embarrassed to find they could not show they could
hear a difference in such a test, despite their claims that differences
were 'obvious' (or some similar claim).
So far as I know, we had one person after another appear and claim they
*could* hear such differences... only to make their excuses and decline
actually engaging in the proposed test. So the problem became that it was
impossible to test the claims "once and for all", because all the people
making them declined to put what they said to a test! The obstacle wasn't
the need for "funding" of a Uni project.
Slainte,
Jim
[1] A list of simple control factors like ensuring similar gain / levels /
response / etc. And a method that ensures the listener can only judge on
the actual sounds produced. Plus avoiding various statistical and
methodological errors that can skew the results or leave them without
assessable reliability.
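
To give a concrete (and purely illustrative) idea of what "assessable
reliability" means here: one common way to assess a forced-choice ABX run
- not necessarily the exact method used in the tests mentioned above - is
an exact binomial test against the 50% guessing rate. A minimal Python
sketch, with made-up trial numbers:

from math import comb

def abx_p_value(correct, trials):
    # One-sided exact binomial p-value: the probability of scoring
    # at least `correct` out of `trials` purely by guessing
    # (chance = 0.5 per trial in a two-alternative ABX test).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical run: 12 correct out of 16 trials.
print(abx_p_value(12, 16))   # ~0.038, below the usual 0.05 criterion
# Whereas 10 out of 16 (~0.23) is indistinguishable from guessing.
print(abx_p_value(10, 16))

The point being that a handful of trials can't distinguish a real ability
from lucky guessing, which is one reason casual "I compared them at home"
reports carry little weight.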
--
Please use the address on the audiomisc page if you wish to email me.
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio
http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc
http://www.audiomisc.co.uk/index.html