In article , Arny Krueger
wrote:
> "Glenn Richards" wrote in message
>> If there's a test method you can suggest that will prove or disprove
>> this, which doesn't involve excessive effort on my part, I'll follow
>> it up and let you know the results.
> Strictly speaking, a test can only prove a hypothesis or fail to support
> it. Absolute disproof is difficult.
To nit-pick a bit. (A tendency of aged rambling ex-academics. :-) )
I tend to be wary of saying that the results of any experimental test
either 'prove' or 'disprove' a hypothesis. I prefer to leave such terms
to mathematicians, lawyers, and other sorts of theologians. ;-)
I prefer to consider this as follows:
The results of a test would either be 'consistent with' or 'conflict with'
a hypothesis with some given level of 'confidence'. The level of
'confidence' is based on being able to assess the results in the usual ways
applied to experimental data. e.g. via suitable statistical analysis of a
set of results. The details of all this would vary from one idea and test
method/results to another, and also with the level of risk that an outcome
arose due to an error of some kind.
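As a concrete illustration of the sort of statistical assessment meant above
(my own sketch, not anything from the posts being quoted): for a blind
listening comparison of the ABX kind, the usual question is how likely the
observed score would be if the listener were merely guessing. The trial
counts below are invented for the example.

```python
from math import comb

def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-tailed probability of scoring `correct` or more out of `trials`
    if every answer were a pure guess with success rate `chance`."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical example: 12 correct identifications in 16 blind trials.
p = binomial_p_value(12, 16)
print(f"p = {p:.4f}")  # p ~ 0.038: unlikely, but not impossible, by guessing
```

A small p here means the result 'conflicts with' the guessing hypothesis at
a correspondingly high level of confidence; a large p means the test simply
failed to distinguish the listener from a guesser. Note that even p = 0.038
leaves a non-zero risk of a fluke, which is why repeated well-run trials
matter more than any single result.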
However if we have a number of well run trials/tests that deliver results
people have examined with due caution and find consistent and convincing,
then we'd tend to accept the idea that 'passed the tests' as being 'valid'
as a model which has shown worth. The more such, the more confident we
can be in accepting what was a hypothesis as a reliable idea.
Whereas, if tests which seem to stand up to critical scrutiny show results
that conflict with, or contradict, a hypothesis, then we'd tend to decide
to treat the idea with some caution, and perhaps discard it as being
unreliable, and hence of no real worth as a model.
I tend to approach 'science' from the viewpoint that any theory may
eventually have to be discarded or modified **given suitable evidence**
which would justify this. Hence I regard ideas and theories as all being
potentially 'provisional' until we find evidence that allows us to
discard/alter a previous idea and move to a more reliable one, or one
that covers a wider range of circumstances, or gives more accuracy, etc.
Thus my view is more utilitarian and provisional than terms like 'prove'
or 'disprove' are often taken to imply. There always tends to be a
non-zero risk that we are mistaken, but we can hope to reduce this to
the level where we can neglect it with some safety *if* we use
proper methods. :-)
Ignoring the above nit-picking, however, we can use test results to form a
view as to whether a given idea shows any real merit, and so decide whether
it should be accepted or discarded as unreliable. The
strength of this decision would depend on the quality and care of the
test(s), and the extent and detail of their results.
However, by the same token, 'tests' which do not employ an appropriate
protocol, and/or do not give results which can be assessed, are essentially
worthless since we can't really use them to decide if their 'results' are
determined by the proposed idea or not. Thus their outcomes aren't really
'evidence' in terms of the scientific approach. This does not automatically
mean the ideas behind them are 'wrong' - just that the test gives us no way
to tell, one way or the other.
As such, the above has nothing to do with 'subjective' versus 'objective'.
It is simply a matter of arranging to get results which can be assessed
for their level of reliability, etc.
[snip]
> The world is full of people who have had some success in IT and think
> that that means they know more about audio than the old experienced
> hands. Don and Jim are old experienced hands with audio.
Alas, in my case the 'old' part probably now dominates. :-)
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html