January 15th 06, 12:44 PM posted to uk.rec.audio
Jim Lesurf
DBT in audio - a protocol

In article .com, andy wrote:
Jim Lesurf wrote:

[snip]

If the test subject *is* being affected by this 'external' (i.e.
nothing to do with the sound) info, it would then show up as a 'bias'
in the "being told" tests that changed their results from the "not
being told" ones....


This is not quite what I intended. There is only one experiment, but what
is seen and what is heard are independently controlled, enabling the
effect of both on the results to be determined. I did not specify how to
achieve the visual deception or suggestion, which would be an important
part of the design of the experiment.


OK. The primary difficulties here would hinge on validation, which I will
try to explain below...

The difficulty with this process is that it can only be expected to be
useful if the test subject has not been warned that when they are
"told", *what* they are "told" may be deliberately incorrect at times.


Of course, but if you wish to quantitatively measure the effect there is
no alternative of which I am aware: you have to separate sight/suggestion
and sound.


I would agree. However there are some distinct purposes here:

1) A test which is intended to see if the test subject can tell one cable
from another, based solely on the sounds produced.

2) A test which is intended to see if a subject's ability to correctly
identify which cable is in use is affected by a specific form of
'non-audible' information - e.g. by being able to 'see' which is in use.

These would be different experimental aims, so it would be reasonable to
employ different experimental test arrangements for them. Thus a test may
be suitable for (1) but not (2), or vice versa. You then choose the test
that is relevant. Ideally, one tries to deal with one issue at a time; so,
to avoid complexity and the extra risk of unexpected problems, avoid a
test that tries to combine both unless there is a pressing reason to the
contrary.
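
To make the distinction concrete, here is a minimal sketch of how the two
trial lists might be generated. The Python and the names are purely my own
illustration, not part of any proposed protocol:

import random

def trials_for_aim_1(n, rng):
    """Aim (1): sound only. Each trial just fixes which cable is actually
    played; the subject sees and is told nothing."""
    return [{"played": rng.choice("AB")} for _ in range(n)]

def trials_for_aim_2(n, rng):
    """Aim (2): a label is *shown* as well, chosen independently of what
    is played, so it may be true or false on any given trial."""
    return [{"played": rng.choice("AB"), "shown": rng.choice("AB")}
            for _ in range(n)]

rng = random.Random(1)
print(trials_for_aim_1(4, rng))
print(trials_for_aim_2(4, rng))

The point of the sketch is simply that (2) introduces a second, independently
varied factor, which is what makes its analysis heavier.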

[snip]

Also, they may come to suspect this whilst being tested, so cease to
rely on what is said, and this might also cause any actual 'bias' to
vanish.


If this happens then it should be visible in the results and in
discussion with the subject at the end.


That assumes that we have collected enough data to do so with statistical
reliability *and* that they suddenly decided this with confidence at some
point. Given that we don't know if or when this may have happened, it
would make for much longer test routines, and make the results much harder
to analyse with a given level of confidence (in statistical terms). This
in turn may well make other problems - like fatigue or impatience or
varying auditory ability for other reasons - worse. Hence I would say this
should be avoided for such reasons.
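
To give a feel for what 'enough data' means here, a small worked example
(my own illustration, not anything from the thread) of the binomial
arithmetic for a simple same/different test:

from math import comb

def binomial_tail(k, n, p=0.5):
    """One-sided chance of k or more correct answers in n trials
    if the subject is purely guessing with success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 7/10 correct could easily be luck (p ~ 0.17); 12/16 would not be
# explained by guessing at the usual 5% level (p ~ 0.04).
for k, n in [(7, 10), (12, 16), (15, 20)]:
    print(f"{k}/{n} correct: p = {binomial_tail(k, n):.3f}")

Even for a single factor that is already a fair number of trials; trying to
determine two factors at once pushes the number up further.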

So such an approach would be quite difficult to validate.


Validate? I cannot see anything to validate, but this may be a difference
of terminology.


I will try to explain using your previous posting:


On 13 Jan in uk.rec.audio, andy wrote:
Don Pearce wrote:
No, please try again. I really didn't understand how sighted bias
could be factored out of this situation.


Consider an experiment where the subject sees the 4 pairs:


A B, A B, A B, A B


but actually hears the 4 pairs:


A B, B A, A A, B B


If the subject claims to hear (D=different and S=same):


D D D D then 100% correlated with sight and 0% correlated with sound


D D S S then 0% correlated with sight and 100% correlated with sound


S S S S then 0% correlated with sight and 0% correlated with sound


The first indicates the subject is biased by sight and cannot tell from
the sound; the second indicates the subject is not biased by sight and
can tell from the sound; and the third indicates the subject is not
biased by sight but cannot tell from the sound.
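
As an aside, the bookkeeping being described can be written down quite
compactly. A minimal sketch (my own, purely illustrative): it counts raw
agreement with each pattern, where agreement at the chance level (2 out of
4 here) corresponds to the '0% correlated' cases above, and a subject who
gives the same answer every time is simply not discriminating at all.

shown  = [("A", "B"), ("A", "B"), ("A", "B"), ("A", "B")]   # what the subject sees
played = [("A", "B"), ("B", "A"), ("A", "A"), ("B", "B")]   # what is actually heard

def same_diff(pairs):
    """Reduce each pair to 'S' (same) or 'D' (different)."""
    return ["S" if a == b else "D" for a, b in pairs]

def matches(responses, pattern):
    """How many of the subject's S/D answers agree with a pattern."""
    return sum(r == p for r, p in zip(responses, pattern))

for resp in (list("DDDD"), list("DDSS"), list("SSSS")):
    print(resp,
          "agrees with sight:", matches(resp, same_diff(shown)), "/4,",
          "with sound:", matches(resp, same_diff(played)), "/4")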


The problem is with the reliability and confidence of the 'indications' or
'implications' you draw.

Since you are simultaneously trying to determine *two* factors, the test
routine will tend to have to be longer, and the analysis more extended, to
get a given level of confidence.

You then have the problem that the person may or may not accept what they
"see" as meaning anything, and this may vary during the test in a way you
would have to hypothesise about after the event. The conclusion above that
the result was X% sight and Y% auditory assumes the factors remain
constant throughout the set of decisions they made. Given that you have no
way to measure this, you have to make an assumption about it, which may be
wrong. How do you check - by measurement - your assumption? Without this
you may have injected into your process an assumption you can't validate.


This can be explained in another way by considering: what actual
question(s) are you either explicitly or implicitly asking of the test
subject? And what statement of 'information' perhaps accompanies it?

If your 'question' has the form "I am telling you that cable A is being
used. Am I lying?" then their focus may be on trying to read your
expression or tone of voice to see if you are lying or not, and not
on the actual sounds. It also immediately alerts them to the possibility
that your statement may be false, thus spoiling the chance that being
told "cable A is being used" will have any effect at all.

The point here is that what you may wish to know is "Does *believing*
that cable A is being used cause them to think there *is* an audible
difference even if none actually exists?" In normal use they would
reliably know which cable was in use, and would not need to decide
if this was an error. But the form of your test, and the statement/
question you present, changes this, so the pre-conviction regarding
which cable *is* being used is absent. Hence your test is not a test
of what you may be interested in examining, and the test method
is not valid for the hypothesis you are actually interested in testing.


Obviously one needs to take more samples to get a reasonable level of
confidence in the results. The required number of samples will also be
significantly more than that required with blind testing for the same
level of confidence.


Indeed. However the real problem is that you also then have to try to fit
a result where the degree of any influence due to sight may alter during
the collection sequence, and in a way you cannot independently measure.

As presented, the experiment is also almost certainly too naive to get
accepted by most subjects. Complexity will need to be added.
Nonetheless, hopefully, the principle is clear.


I would say that 'complexity' is actually part of what makes some people
wish to reject a test. Hence your proposal may be seen as having loopholes
and hidden assumptions that then cause people to say it is invalid.

It also removes the facility of the 'ABX' methods whereby the test subject
can refresh their impression of A or B whenever they wish (with confidence
that they *are* choosing A or B). This may be very useful in dealing with
any tendency either to 'forget' a sound, or with time-dependent drifts in
perception or in other common-mode factors of the test arrangement.
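
For comparison, a bare-bones sketch of that ABX 'refresh' facility (my own
illustration; play_clip is a hypothetical placeholder for whatever actually
switches cables and plays the audio):

import random

def play_clip(label):
    # placeholder for the real switching / playback arrangement
    print(f"[playing {label}]")

def abx_trial(rng):
    x_is = rng.choice("AB")                 # hidden identity of X this trial
    while True:
        ans = input("A/B/X to listen again, or a/b to answer: ").strip()
        if ans in ("A", "B"):
            play_clip(ans)                  # subject may refresh A or B at will
        elif ans == "X":
            play_clip(x_is)                 # X plays whichever it secretly is
        elif ans in ("a", "b"):
            return ans.upper() == x_is      # was the identification correct?

if __name__ == "__main__":
    rng = random.Random()
    results = [abx_trial(rng) for _ in range(16)]
    print(sum(results), "correct out of", len(results), "trials")

The key point is simply that the subject can return to the known references
at any moment before committing, which the sighted-suggestion design above
does not allow.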

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html