In article , Rob
wrote:
> Arny Krueger wrote:
>> "Rob" wrote in message
>> ...
>>> I can guess the background (in methodological terms) to the test you
>>> cite, and I'd happily discuss it with you here or elsewhere.
>> It's pretty simple. We lined up the highest quality live and recorded
>> analog audio sources we could in one of the top recording studios in
>> the region, and compared a short piece of wire with a device that put
>> the audio signal into CD format and then converted it back to a
>> regular audio signal. We found no audible difference, using a variety
>> of musicians, audio engineers, and experienced audiophiles as our
>> listeners.
> Again, you're confusing methodology with method.
Do you mean by "methodology" here the reasons for the choice of the
specific experimental method and protocol used? If so, see below...
>>> I also have a few issues with method mentioned elsewhere in this
>>> thread.
>> What are they?
> I have no 'expert' knowledge of testing protocols in this context. I
> would have thought any lay person would point to:
>
> Environmental variables - light, heat, seating, audience.
>
> Sample - did you test their hearing acuity? It strikes me, and here I
> lapse into stereotype, that the people involved were possibly
> middle-aged men? Who by training listen for and expect particular
> things? Whose hearing is possibly past its best?!
In my experience it is not common in research reports or papers to give all
the details of why a given method was chosen.[1] They would normally be
summarised or taken as assumed on the basis that those working in the field
can be expected to have read the relevant background material for
themselves and should know already the strengths, weaknesses, and purposes
of specific methods or protocols for that specific area of study. e.g. they
would already know what main confounding or interfering factors would need
to be controlled or dealt with by the means employed.
The main exception to the above is where a 'new' method is being introduced
(or challenged). The reasons for this should then either be given, or
explicitly referred to, so the reader can look at the reference(s) to
decide this for themselves.
The above is probably why many experimental scientists tend not to concern
themselves with this: they just use the 'usual tools from the toolkit'.
However, when a method/protocol is well established, the normal expectation
is that the onus is on anyone who wishes to challenge it to do so, giving
both (testable) reasons for their concerns and an alternative which can be
put into practice and judged by its behaviour.[2] i.e. the
methods/protocols themselves are also subject to the scientific method.
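As a concrete illustration of judging such a protocol by its behaviour (my
own sketch, not a description of the test Arny reports, whose scoring
details aren't given here): a forced-choice listening comparison, ABX-style,
is commonly assessed with an exact binomial test against chance (p = 0.5).
The function name and the example trial counts below are hypothetical.

```python
from math import comb

def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided probability of scoring `correct` or better by guessing alone.

    Sums the binomial tail P(X >= correct) for X ~ Binomial(trials, chance).
    """
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical example: a listener identifies X correctly in 12 of 16 trials.
p = binomial_p_value(12, 16)
# p is about 0.038, below the conventional 0.05 threshold, so such a score
# would count as evidence that a difference was actually heard. A run of
# scores near 8/16 is consistent with guessing, i.e. "no audible difference".
```

The point is simply that "no audible difference" is a statistical verdict
about listeners' scores, not a raw impression, which is why the protocol
itself can be tested and challenged in the way described above.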
Rob, if you are interested in the specifics for audio here, it might make
sense for you to join a body like the AES or find a suitable uni library.
This could probably lead to the info you require.
Slainte,
Jim
[1] Note, though, that this is mostly in areas quite different to audio
listening comparisons, etc.
[2] Doing so may then quickly lead to finding material already published
that covers the relevant points - or may not. Such is research. :-)
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html