In article , Rob wrote:
Jim Lesurf wrote:
In article , Rob
[snip]
I don't expect anyone to accept the points I make simply because I say
them. But people can read the detailed reports I am referring to for
themselves if they wish, and form their own conclusions. Alas, in
general, the UK consumer magazines don't make any mention of this, so
people tend to be unaware of just how much work has been done.
I'd be more comfortable if you could relax around the notion that hard
and fast conclusions are simply not accessible to most. Having a
preference is relatively easy - understanding why is rather more
complicated (enter Natural Don:-)).
I'm quite happy with the idea that most people often find it easier to make
personal judgements about such matters without the bother of checking
whether their approach has any real rigour. That is fine if
people are making their own first-hand assessments *only* for their
personal decisions. Up to them what errors they may make. :-)
My concern is that people may then decide that their results *are* reliable
as conclusions that apply more generally, or even universally, or are
inherent to the entire class of items. And that they then state their
conclusions to others as if this were the case. Also, that when such
informal 'tests' are done in magazines, readers may be led to assume that
the views of the reviewer mean anything to others with any real reliability.
My particular concern is that people are paid to write reviews in magazines
which others then read, and may be misled. And that people may accept
what they are told without being in a position to assess this for themselves.
I feel that a professional who may be seen by readers as an 'expert' has a
duty of care to ensure that the methods they use to reach the conclusions
they publish *are* methodical and appropriate, and could be assessed for
reliability.
That said: I tend to approach such things according to the old Chinese
maxim: "Give a man a fish and feed him for a day. Teach him to fish
and he can feed himself for life." :-) Hence I much prefer the idea that
articles, etc, should explain to readers how to understand and make up
their own minds, if necessary being critical of the reports they read. Not
just present the reviewer's opinions and judgements as if their reaching
them made them correct.
Hence I object to reviews and comments for which no assessable basis is
given, or where it seems likely that the methods used mean the results
may be an unreliable guide for anyone other than the person making the
claims/comments. And why I try to encourage people to be critical and to
try to form an understanding of their own, not just to accept what 'gurus'
in magazines tell them.
Must admit that when I see reviews of multi-thousand-pound items I often
wonder if the best conclusions would have been "save your money and then
spend it on some more of your favourite recordings." I suspect that would
do far more to increase the level of enjoyment than 'upgrading' by spending
vast amounts. But I guess I am an old cynic. ;-)
The reviewer may simply not have the time or the equipment, nor
perhaps the ability, to replicate this in the limited scope of a
magazine review. Alas, in some cases a reviewer may persistently
misunderstand the meaning of the measurements and the results they
produce. Given all this, it is understandable why the published result
may seem so unsatisfactory. It all depends on the individual reviewer,
etc.
Yes, more's the pity. ISTR one magazine carried reviews with a right to
reply for a while - that was interesting.
It was quite common in the late 1950's and early 1960's for the mag to
carry some comments alongside the review from the maker or designer. Also
for them to be consulted whilst the reviewer was testing the product to
ensure what he found was not an error on his part.
Indeed, if you look back at UK reviews in those days in a mag like HFN
you find that many of the reviewers were also designers who developed
equipment themselves. Examples like Stan Kelley and George Tillett
spring to mind. (USA readers may know George. In the UK he designed
amplifiers for firms like DECCA and Armstrong, but then emigrated
to the USA.) At that time the UK hifi scene was a small, and generally
friendly, family. The advantage was that most of the reviewers really
knew their topic as they worked on designs themselves.
This slowly changed, though, as companies became more competitive, and
a distinct breed of reviewer became more common: people who
specialised in doing reviews and regarded this and magazine writing
as their profession/job.
However by the late 1970's it became common for reviewers to start offering
their services as a 'consultant'. If you paid, they'd do a sort of 'private
review' for you on a prototype. This was long before the same person might
then do the actual printed review. This earned them more cash. But it was a
bit of a racket. The problem was that makers came to feel that they had to
do this to ensure that the reviewer wouldn't find any 'serious problems' in
the actual printed review.
Also, some makers and designers started to get a reputation in the biz of
either 'pressurising reviewers' behind the scenes or becoming very
'friendly' with a reviewer. You would hear tales of how X took Y out to
dinner, or they were seen a lot together, etc.
The whole process started to become open to undue influence, and the worry
that shady dealings were going on. Even if not always well-founded, such
rumours and suspicions undermined confidence.
So around the end of the 1970's the UK magazines decided that they'd push
for reviewers *not* meeting with makers/designers of reviewed items until
after the actual review was published. This isolated the reviewers from
some of the pressures and the rumours of dirty tricks. But it also meant
that they made daft errors in reviews which a 10 min chat with the designer
or makers would have sorted out. And it also meant that any feedback tended
to appear in the magazine 2-3 months later when people had forgotten most
reviews anyway. So most makers and designers decided to let most review
errors and nonsenses pass without specific comment. Simpler to rely on the
fact that in most cases the reviews of a product tended to disagree, and a
given error was rarely made in more than one review of a given product. So
this was all treated as being like the British weather. Not ideal, but put
up with it. :-)
All of the above relates to the UK. I can't say what the situation has
been elsewhere.
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html