In article , Paul B wrote:
Thus spake Jim Lesurf:
I can't cite examples but my impression, FWIW, is that calibrated
variations can be fairly large before becoming significant. If that
is indeed the case, I suggest that using DB testing to detect
auditory differences is largely pointless.
I don't regard it as 'pointless'. Its 'point' is to indicate what the
actual limits of perception may be, regardless of the beliefs or
wishes of the individual. The results show that people often cannot
hear differences that they believe they can.
If subjects can't hear fairly large differences in a calibration cycle,
I can envisage two explanations. Firstly, some/many/most subjects are
fairly insensitive to variations &, by definition, would be wasting
money by buying expensive audio equipment for sonic reasons alone.
That may be so. However, if so, we would then have to be cautious about
drawing specific/individual conclusions from the above, as it is a
generalisation. So some people *might* be able to hear *some* differences
when others cannot.
But to see if this is the case, we would first need some test subjects to
demonstrate in a suitable test that *they* *can* hear a given 'difference'
even if (many) others cannot. Otherwise the simplest hypothesis consistent
with the results may be that - despite claims to the contrary - *no one*
can hear a given 'difference'.
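(As an illustrative aside, a toy simulation makes the pooling point
concrete. The numbers are purely hypothetical - nine listeners guessing
at chance plus one genuinely sensitive listener - and Python is used
simply as a convenient sketch, not as a real test protocol:

  import random

  # Purely hypothetical: nine listeners who guess at chance (50%
  # correct) plus one sensitive listener who scores about 90%.
  random.seed(1)
  trials = 16
  scores = [sum(random.random() < 0.5 for _ in range(trials))
            for _ in range(9)]
  scores.append(sum(random.random() < 0.9 for _ in range(trials)))

  print("per-subject correct (out of 16):", scores)
  print("pooled fraction correct:", sum(scores) / (10 * trials))

Pooled across the group, the fraction correct sits near 0.5 - i.e.
chance - hiding the one subject whose individual score stands out.
Hence the need to test individuals rather than rely on group averages.)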
The second is that, because of the way the mind works, comparing sequences
such as replaying the same piece of music is going to confuse the
subjects & muddy the results. I can imagine this explanation being very
inconvenient to many because it throws in hidden variables, such as how
reliable human memory is & its effects on the outcome. I only entertain
this possibility because my own experience suggests measuring
qualitative stuff can be damned difficult. A lot of people also state
they can hear differences beyond measurability.
The problem with the above is as follows:
IIUC there is good evidence to the effect that our memory and state of mind
affect what we notice, or how we perceive or judge what we experience.
This may be a reason for saying that 'time serial' comparison tests are
affected by this, so tending to reduce the noticeability of real
differences.
However, this may also mean that people hear 'differences' which are due
simply to their change in mental (or physiological) state, etc. Thus they
may be saying that one item sounds different to another when the actual
sounds produced are unchanged.
Thus the same 'mechanism' proposed to 'explain' why such tests tend to show
people unable to hear a difference also 'explains' why they may think they
hear differences in situations where none really exists.
The upshot is that we then have no reliable evidence that any such
differences exist - plus we have a reason for saying that what people
claim may be based on an error.
This would also give us grounds to say, "since the perceptions are
variable, there is no real point in worrying about differences so slight as
to fall within these variations".
Thus we end up with "a difference which makes no difference *is* no
difference". (Spock's Rule.) :-)
The advantage of some of the ABX forms of test is that the comparisons can
be done on all sorts of time scales - under the control of the test
subject. So they can switch quickly if worried about 'memory' or drifts in
their physiology, etc. For some kinds of difference this seems IIRC to
produce enhanced sensitivity. But for others it shows no sign of the
subjects being able to hear any difference, on any timescales people have
employed.
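(For concreteness, the scoring side of such a test is just binomial
statistics. A minimal sketch - assuming a forced-choice ABX run, and
with made-up numbers - might look like this:

  from math import comb

  def abx_p_value(correct, trials):
      # One-sided binomial p-value: the chance of scoring at least
      # `correct` out of `trials` by guessing (p = 0.5 per trial).
      return sum(comb(trials, k)
                 for k in range(correct, trials + 1)) / 2 ** trials

  # Hypothetical example: 12 correct out of 16 trials gives p of
  # about 0.038, below the usual 0.05 threshold, so guessing alone
  # is an unlikely explanation for that score.
  print(abx_p_value(12, 16))

Anything near 8 out of 16 is statistically indistinguishable from
guessing, whatever the subject believes they heard.)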
I heartily wish I could suggest alternatives but I can't.
Well, from the POV of the scientific method a hypothesis has to be
testable to have any validity/meaning. So if you/someone can't propose
and carry out an appropriate alternative, we have to stick with
hypotheses we *can* test. This is to avoid people simply believing
whatever they choose, regardless of the reality.
But only if the tests are valid & don't end up perpetuating a fallacy.
If it meant going back to the drawing board, so be it.
The problem with *if* here is that it is a speculation. That has no real
use in the scientific method *unless* you can then propose a test which
would distinguish your hypothesis from the competing ones...
Thus a given test *might* not be 'valid'. But to decide this would require
a suitable test, ideally also a proposed 'mechanism' for the cause of the
lack of 'validity' which the new test would probe.
Without that, we have to work on the basis of using the hypotheses that are
consistent with the evidence we have, and trying to avoid adding mechanisms
which the evidence does not require, or ideas we cannot test.
Many things *might* be the case. But that does not tell us they *are* the
case. For that we require relevant evidence. Alas, "the evidence does not
agree with my beliefs" is not actually evidence... :-)
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html