In article , Glenn Richards
wrote:
Jim Lesurf wrote:
The shame, here, is that I can think of at least one theoretical
mechanism by which bi-amping and bi-wiring might sound the same, but
differ from using one amp/wire. So the claims Glenn makes are
consistent with one physical model. But the way he carries out the
'test' means his report is virtually useless for assessing if his
results actually support *any* specific hypothesis. :-/
Feel free to send me some more information on this mechanism, either on
here, by email, or as a URL.
Go to the 'Scots Guide' (URL in my sig). Click on the 'Analog and Audio'
link. Scroll down that page and you'll find links to a few pages that
discuss this and explain a mechanism which *might* mean that bi-wired (and
bi-amped) systems could differ from conventional ones.
However this is simply a model of what can arise, in principle. There is no
experimental evidence I am aware of that shows this *does* cause audible
changes in any particular practical cases. (This is why it seems a shame to
me that your reports, and those others make, are done using 'tests' which
give results that have no real value in deciding such questions.)
The point here is that the effect I described can be expected to occur, but
at such a low level with most amps, etc, that it seems doubtful that the
difference would be large enough to be audible.
If there's a test method you can suggest that will prove or disprove
this, which doesn't involve excessive effort on my part, I'll follow it
up and let you know the results.
The 'test method' would be one designed to satisfy the requirements of the
scientific method. This could be done in various ways, so there is no
'unique' approach or protocol. In order to discuss and explain this, you
need to understand the actual scientific method, as that then allows you to
critically assess the proposed experimental protocol. The aim is to have a
'test' which can satisfy various requirements, including the following:
1) The test should be 'critical', in that its outcome can be clearly
inconsistent with a given hypothesis, or clearly consistent with it. In
effect, the intent is to try and 'catch out' an idea and find out if it is
flawed or incorrect, but using a test which can also show it is correct if
that happens to be the case.
2) The test should provide results in a form that both test method and
results can be assessed by others. The scientific method does not proceed
on the basis that 'Joe did a test and he's clever so I agree with him'. It
works on the basis that others can also examine the test, look for flaws,
and do their own assessment of the results.
3) It should be arranged to take into account the possible presence of some
'randomising' effects which may affect the result of an individual
observation for reasons of which those participating may not be aware at
the time.
4) It should be arranged to make 'common mode' as many as possible of the
systematic alternative factors that might otherwise give a result which is
then misinterpreted.
So, for example, in a 'listening' test we may have some variation in the
location of the head of the listener in the room acoustic. This may give
'random' changes from one 'trial' to another. Hence you'd have to repeat a
comparison a number of times to check if this was occurring. The number of
times involved would then set a probable limit on any such effect, or show
it was occurring so that it could be taken into account. (This is for '3'
above.)
Similarly, for '4': something like a change of wiring may place the 'new'
wires in a different location to the 'old' ones, and hence alter something
like the coupling to other cables, perhaps changing the hum level or
something else. This would produce a change for reasons that were not the
purpose of the test.
Another example. If you use the same piece of music for comparing two
arrangements, then the physiology of human hearing needs to be taken into
account. e.g. hearing a loud section alters the sensitivity of the ears, so
if you play it again soon, your ears are physically different the second
time around. To deal with this, tests should be repeated a number of times,
swapping the order of the chosen arrangements, to see if this affects the
results.
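One hypothetical way to implement the order-swapping just described is to build the presentation schedule in advance, balancing 'AB' and 'BA' orderings and shuffling them, so that effects like ear adaptation average out rather than always favouring whichever arrangement is heard second (a sketch; the function name is illustrative, not from any real test protocol):

```python
import random

def counterbalanced_schedule(n_trials, seed=None):
    """Return a shuffled list of presentation orders, half 'AB' and
    half 'BA', so neither arrangement is systematically heard first."""
    if n_trials % 2:
        raise ValueError("use an even number of trials to balance the orders")
    schedule = ["AB", "BA"] * (n_trials // 2)
    random.Random(seed).shuffle(schedule)  # seed only for reproducibility
    return schedule

print(counterbalanced_schedule(8, seed=1))
```

Fixing the schedule before listening begins also means it can be published alongside the results, so others can check that order effects were controlled.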
The results can then be analysed by anyone who has access to the data
produced and knows the details of the test protocol.
The problem is that if you don't do the 'test' using such suitable methods,
the 'results' could mean anything. You might think they were due to the use
of two wires instead of one, but might just as easily have been due to some
other effect which did not occur to you at the time.
In practice, if you are not really familiar with this, and with the
scientific method, the best bet is to discuss the protocol arrangements
with others *in advance* to determine what makes sense, and for people to
point out the flaws/omissions in a given test protocol. If you really
understand the method, you will already know the basics of this, and be
able to suggest some of the other things that would be required. The
requirements will depend on the specific ideas being tested.
People tend to recommend protocols based on ABX or 'double blind' in part
to try and avoid results being affected by the expectations of the
participants, but also partly to try and remove 'external' factors, and to
give a protocol where repeated trials can provide data that can be analysed
to assess any 'random' or 'systematic' factors that might otherwise lead to
misleading or incorrect conclusions. However the point here is not the
specific choice of protocol, it is that the experiments are arranged in
order to deal with such issues.
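To make the ABX idea concrete, here is a minimal sketch of the bookkeeping involved (my own illustration, not a prescribed protocol): on each trial the identity of X is assigned at random and hidden from the listener, whose guesses are then tallied for later statistical analysis:

```python
import random

def run_abx_trials(n_trials, respond, seed=None):
    """Minimal ABX bookkeeping: on each trial X is secretly A or B;
    `respond(trial)` returns the listener's guess ('A' or 'B').
    Returns the number of correct identifications."""
    rng = random.Random(seed)
    correct = 0
    for trial in range(n_trials):
        x = rng.choice("AB")        # hidden assignment for this trial
        if respond(trial) == x:
            correct += 1
    return correct

# A listener who merely guesses will average around half correct:
print(run_abx_trials(16, respond=lambda t: random.choice("AB")))
```

The final score only becomes evidence once it is compared against what guessing alone would produce, which is exactly why the number of trials and the analysis must be decided before the listening starts.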
It is up to you to decide if the effort is "excessive". Alas, the reality
is that unless you *do* employ suitable methods, the tests you report will
be essentially void of any value. The above does not require any particular
expense. But it takes some time and understanding to be able to do a useful
test which can provide valid results.
In practice, therefore, the choice is between 'easy' tests whose results
have no real value (hence a waste of time for all concerned) or 'serious'
ones which may take time and effort, but might then deliver useful evidence
and *not* be a waste of time.
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html