What a sad excuse for a group this is...
Jim Lesurf wrote:
In article , Andy Evans wrote:
However, if we test and compare two items or systems and find that
listeners can't distinguish the sound of one from the sound of the other,
then we have evidence that assumptions that they "sound different" need
not be taken seriously when commenting on the items or systems.
*Unless* some other appropriately run test shows other results in the
form of evidence that can be assessed.
I don't think you wrote the above, Andy, despite taking the credit
for it.
It would help if you were to identify when you quote in the standard
manner. However...
I think the difficulty here is that "listeners" is a variable, and so is
"test conditions". The test conditions would not be too difficult to
replicate, but the listeners could not easily be replicated, nor could
their emotional or health states at the time of testing, even if they were.
This is why a number of such tests have been done, using varied
listeners and various situations. As the evidence accumulates, it gives
some statistical measure of the reliability of the results. Your objections
have been thought of, and repeatedly dealt with, over some decades.
I would hazard a guess that the quality, aural acuity and perceptual
sensitivity of a listening panel could not easily be standardised, and
since the whole experiment depends on their aural perception, I'd foresee
this as a logistical problem.
How would you suggest tackling this in logistical terms?
Since you didn't bother to reference who you were quoting, you'd have
to say who you are asking, and why they should do what you ask. :-)
However...
I/we don't need to "suggest" anything, as people working on the topic have
*already* tackled the problems you raise, as indicated above. The tests
already done cover a range of cases and listeners, and the results tend
to show that - regardless of beliefs to the contrary - people often show
no ability to hear the 'differences' they assert they can. I lost count
some years ago of how many such tests have been done using different
groups of listeners, etc. People have been doing them for over two
decades to my knowledge.
Similarly, there are cases when listeners *can* distinguish one thing from
another and do so with statistical reliability, e.g. where the comparison
is for a large enough difference in level, or frequency response.
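The "statistical reliability" mentioned above is usually assessed with something like an exact binomial test on a forced-choice trial (e.g. an ABX comparison): how likely is the listener's score if they were only guessing? A minimal sketch, with the trial counts purely illustrative and not drawn from any actual test:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of scoring at
    least `correct` out of `trials` by pure guessing (chance p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical example: 12 correct out of 16 trials.
# p ~ 0.038, i.e. unlikely to arise from guessing alone.
print(round(abx_p_value(12, 16), 3))
```

A result like this is what lets an experimenter say a listener *can* distinguish two items with some statistical reliability, while a score near chance supports the null result.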
If you randomise the panel, this would not correspond to audiophile
listeners.
People have, as I point out, used both various 'audiophile' groups, and
other groups. So far as I know, the results are fairly consistent for
specific classes of items under examination - e.g. between amps. They
indicate what can, and cannot, be heard with any reliability in
various cases, by a range of people.
Maybe you would need to randomise a sample of audiophiles who
had already been tested for good hearing. Whether you would consider
musicians and audiophiles as equivalent would, additionally, truly set
the cat among the pigeons.
It is, of course, open to you and anyone else to run their own properly
conducted tests, and report the results. No need for any "maybe" or
speculations which are unsupported by the evidence we already have. So, for
example, if you think a specific factor matters, or that some people are
'golden eared' then you can test your theory and see if the evidence
supports it. However if you check the history of what already has been done
you may well find that someone else has already tried the hypothesis you
have in mind, and found it didn't stack up when tested. So if you wish
to learn, then the standard academic science methods of a literature
search and doing your own experiments are yours to take up. :-)
Could you or anyone give me a clue here - an author perhaps? I've just
read something by Marc Perlman* - but I shouldn't think it's up your street!
Rob
* Marc Perlman (2004), "Golden Ears and Meter Readers: The Contest for
Epistemic Authority in Audiophilia", Social Studies of Science, 34, 783.