In article , Rob wrote:
> Jim Lesurf wrote:
> [snip loads]
> 'Accept' is obviously too strong, although this is a body of evidence -
> empirical field data that is often mirrored in similar 'tests',
> conducted with a degree of comparison (the reviewer's own system
> usually), and accompanied by certain measured data.
That is fair enough - provided we have established good reasons to accept
that the reviewers *can* reliably distinguish one component from another.
Alas, their simply asserting this does not establish it. Repeated tests
where they had to rely on the sounds alone would be a more useful basis.
Unfortunately, they rarely do tests of that kind.
It seems reasonable to at least consider what they say, though my
experience is that I, and others, often disagree with the opinions in
magazines.
> They put their ideas to the test in the sense that their reputation
> depends upon user experience of what they write. IOW there *is* a case,
> which varies between 'instantly dismissible' to 'highly persuasive'.
I would agree. Alas, we often have little evidence for deciding where on
that scale a particular review comment sits. And in my experience I
disagree with reviewers about as often as I agree. So I would get equally
reliable 'views' by tossing a coin. :-)
An additional snag is that unless we use *their* system in *their*
listening room, and play the same selection of music, the question of
whether we would have agreed with them had we been there at the time is
largely moot. Too many other variables.
Thus the problem is often not that you can be certain that what they say
is wrong (although in some cases this is clear). The problem is that you
often have no way to tell whether they are providing useful info, or
irrelevant nonsense.
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html