
uk.rec.audio (General Audio and Hi-Fi) - discussion and exchange of hi-fi audio equipment.

Too neat to waste...



 
 
#1 - September 4th 06, 02:05 PM, posted to uk.rec.audio
Rob

Jim Lesurf wrote:
In article , Rob
wrote:
Jim Lesurf wrote:



Indeed - I don't know that 'nobody knows', I just think they can't prove
it. The specific case being - an LP and CD from the same master. The LP
sounds better to some people. Why?


In order to attempt to answer your question we'd first need to know all
the specific details of any experimental comparison that led to such a
conclusion. This would be first to see if the listeners *could* actually
tell the difference (solely on the sounds), and then to try and form some
idea of what the reasons might be.

It is quite possible to speculate about possible reasons. However,
without suitable data that could show which speculations stand up and
which don't, speculations are all they would be.

I can't help feeling that as things stand we have much more in the way of
assertions with no details, and speculations, than we do cases which are
documented in a way that would allow us to decide which speculations stood
up.
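[To make that first step concrete, here is a minimal sketch - purely
illustrative, not from the thread, with an invented 12-of-16 example - of
how a blind A/B or ABX comparison is usually scored: count the correct
identifications over repeated trials and ask how probable that score
would be if the listener were merely guessing.]

from math import comb

def guessing_probability(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    if the listener guesses at random (50/50 on each trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical example: 12 correct answers in 16 blind trials.
p = guessing_probability(12, 16)
print(f"p = {p:.3f}")   # ~0.038, i.e. unlikely to be pure guessing

[A small probability suggests the listeners really could tell the items
apart; only then does it make sense to move on to asking *why*.]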


I couldn't agree more - it is difficult. It's all well and good saying
that 'such and such' (vinyl is better than CD, whatever) was reported -
the hard part is understanding why. Some researchers disagree with me
strongly on this point btw - on the grounds that 'if it's significant to
the respondent, it's significant to the thesis'. This is broadly a
constructionist's approach, and you can 'construct reality' in this way.

The problem is that people say things like they prefer one to the other,
but then don't provide any basis for assessing what they say.

Also, bear in mind that the kinds of methods I tend to describe are not
my invention. If you look at the literature on perception/hearing
and the related areas of physiology, etc, you will find that they are
routine.

For example, if you look at the articles on 'hearing' on the Audio
Misc pages you can find some references to journal articles. Some
of those contain other references to literally hundreds of other
research papers. Many of these report the details of tests which use
the methods I describe, and have produced a great deal of evidence
and understanding related to such topics.


That wouldn't surprise me at all. I did start to plough through some of
that literature a while ago, and when I get time I'll do you the
courtesy of a more systematic critique.

I don't expect anyone to accept the points I make simply because I
say them. But people can read the detailed reports I am referring
to for themselves if they wish, and form their own conclusions.
Alas, in general, the UK consumer magazines don't make any mention
of this, so people tend to be unaware of just how much work has
been done.


I'd be more comfortable if you could relax around the notion that hard
and fast conclusions are simply not accessible to most. Having a
preference is relatively easy - understanding why is rather more
complicated (enter Natural Don:-)).


Sometimes. Alas, this can be technobabble at times, or simply
nonsense. Varies.


I find it difficult to make *any* sense of it. I used to read Noel
Keywood's reviews/technical notes on reviews with some interest, but
they often appeared to contradict the subjective report. Just plain
confusing.


Alas, I would not recommend you place too much weight on 'measurements'
of the kinds that appear in such reports. The problem may be that the
measurements are inappropriate, or misrepresented in the reports, and
that other measurements which might shed light on the matter are omitted.
Also - as I suspect you have discovered - you can find that the
subjective comments in different reviews/mags often disagree.

This is one of the persistent problems with the UK reviews. They may
contain some 'measured results'. However the person doing the review
may not have really understood which measurements might be relevant,
or how to interpret them sensibly. The resulting muddle undermines both
the review and any confidence that measurements can be useful.

Given this, I'd agree with your comment. To me, it just seems like many
such reviews are essentially worthless, I'm afraid. Some may not be, but
how can we tell when such poor reviews are all we have to go on?

Bear in mind that the person who *designed* the equipment being reviewed
probably spent many months making all sorts of measurements on it as it
was developed - as well as listening to the results it produced. They
probably made a far wider range of such measurements than the reviewer,
and may thus be much better placed than the reviewer to relate measured
results to actual performance in use.

The reviewer may simply not have the time or the equipment, nor perhaps
the ability, to replicate this in the limited scope of a magazine review.
Alas, in some cases a reviewer may persistently misunderstand the meaning
of the measurements and the results they produce. Given all this, it is
understandable why the published result may seem so unsatisfactory. It
all depends on the individual reviewer, etc.


Yes, more's the pity. ISTR one magazine carried reviews with a right to
reply for a while - that was interesting.



I am afraid that I am biased by my own time in the biz, and by many
later occasions. Too often I found by personal experience that what
people claimed didn't stand up when I tried listening or testing for
myself, or when I was involved in comparisons or tests with others. Thus
I have become rather doubtful of what is published in the UK magazines
when no basis in evidence is given.


We're all biased, and you're right, I think, to try and carve out a
reliable and replicable method that removes bias. But this is also a
methodological point, and it relates to the belief (a bias itself) that
everything that exists can be expressed in a 'scientifically rigorous'
way.


That isn't what I have been saying, though. :-)

I agree that we can't expect to be able to understand *everything*.

But we may well be able to make some progress and learn things which we
previously did not know. And then use that understanding as a basis for
learning more. And to use this to improve things in various ways.

My point is that people can *try* to do so. If they do, in some cases they
may succeed. In others, they may need to re-try and adapt the details of
the methods. This does not mean we can then explain everything by next
Thursday. :-)

To me, trying, and seeing if you make progress in some cases, is better
than not even bothering to try. And as I point out above, people are
systematically studying relevant areas using the approaches I describe.
It is just that you don't tend to hear about it in the UK consumer
magazines!

Alas the UK magazine reviews generally don't even start the process since
they don't normally establish there *is* an audible difference between the
specific items they compare. Nor do they provide any reliable way for us to
decide if you or I or anyone else would agree with them in each specific
case. Maybe a given review is reliable, maybe not. But we generally can't
tell from the review itself.


More's the pity. I don't tend to buy magazines any more, partly for that
reason, and partly because editorials and features are similarly
meaningless. If I want pulp reviews the web is fine. Strangely, even
though I don't use a PC much nowadays, I find Computer Shopper half
decent and I might buy a couple of those a year.

Rob
#2 - September 4th 06, 03:15 PM, posted to uk.rec.audio
Jim Lesurf

In article , Rob
wrote:
Jim Lesurf wrote:
In article , Rob


[snip]

I don't expect anyone to accept the points I make simply because I say
them. But people can read the detailed reports I am referring to for
themselves if they wish, and form their own conclusions. Alas, in
general, the UK consumer magazines don't make any mention of this, so
people tend to be unaware of just how much work has been done.


I'd be more comfortable if you could relax around the notion that hard
and fast conclusions are simply not accessible to most. Having a
preference is relatively easy - understanding why is rather more
complicated (enter Natural Don:-)).


I'm quite happy with the idea that most people often find it easier to
make personal judgements about such matters without the bother of
checking whether their approach has any real rigour. That is fine if
people are making their own first-hand assessments *only* for their
personal decisions. It is up to them what errors they may make. :-)

My concern is that people may then decide that their results *are*
reliable as a conclusion that would apply more generally, or even be
universal, or inherent to the entire class of items - and then state
their conclusions to others as if this were the case. I am also
concerned, when such informal 'tests' are done in magazines, by the idea
that the views of the reviewer mean anything to others with any real
reliability.

My particular concern is that people are paid to write reviews in
magazines and others then read them, and may be misled. And that people
may accept what they are told without being in a position to assess this
for themselves. I feel that a professional who may be seen by readers as
an 'expert' has a duty of care to ensure that the methods they use to
reach the conclusions they publish *are* methodical and appropriate, and
could be assessed for reliability.

That said: I tend to approach such things according to the old Chinese
maxim: "Give a man a fish and feed him for a day. Teach him to fish and
he can feed himself for life." :-) Hence I much prefer the idea that
articles, etc, should explain to readers how to understand and make up
their own minds, if necessary being critical of the reports they read -
not just present the reviewer's opinions and judgements as if the act of
reaching them made them correct.

Hence I object to reviews and comments for which no assessable basis is
given, or where it seems likely that the methods used may make the
results an unreliable guide for anyone other than the person making the
claims/comments. And that is why I try to encourage people to be critical
and to try and form an understanding of their own, not just to accept
what 'gurus' in magazines tell them.

Must admit that when I see reviews of multi-thousand-pound items I often
wonder if the best conclusion would have been "save your money and then
spend it on some more of your favourite recordings". I suspect that would
do far more to increase the level of enjoyment than 'upgrading' by
spending vast amounts. But I guess I am an old cynic. ;-)



The reviewer may simply not have the time or the equipment, nor perhaps
the ability, to replicate this in the limited scope of a magazine review.
Alas, in some cases a reviewer may persistently misunderstand the meaning
of the measurements and the results they produce. Given all this, it is
understandable why the published result may seem so unsatisfactory. It
all depends on the individual reviewer, etc.


Yes, more's the pity. ISTR one magazine carried reviews with a right to
reply for a while - that was interesting.


It was quite common in the late 1950s and early 1960s for the mag to
carry some comments from the maker or designer alongside the review, and
also for them to be consulted whilst the reviewer was testing the
product, to ensure what he found was not an error on his part.

Indeed, if you look back at UK reviews in those days in a mag like HFN
you find that many of the reviewers were also designers who developed
equipment themselves. Examples like Stan Kelley and George Tillett spring
to mind. (USA readers may know George. In the UK he designed amplifiers
for firms like DECCA and Armstrong, but then emigrated to the USA.) At
that time the UK hifi scene was a small, and generally friendly, family.
The advantage was that most of the reviewers really knew their topic, as
they worked on designs themselves.

This slowly changed, though, as companies became more competitive and a
distinct breed of reviewer became more common: people who specialised in
doing reviews and regarded this, and magazine writing, as their
profession/job.

However, by the late 1970s it became common for reviewers to start
offering their services as a 'consultant'. If you paid, they'd do a sort
of 'private review' for you on a prototype, long before the same person
might do the actual printed review. This earned them more cash, but it
was a bit of a racket: makers came to feel that they had to pay for this
to ensure that the reviewer wouldn't find any 'serious problems' in the
actual printed review.

Also, some makers and designers started to get a reputation in the biz of
either 'pressurising reviewers' behind the scenes or becoming very
'friendly' with a reviewer. You would hear tales of how X took Y out to
dinner, or they were seen a lot together, etc.

The whole process started to become open to undue influence, and the worry
that shady dealings were going on. Even if not always well-founded, such
rumours and suspicions undermined confidence.

So around the end of the 1970s the UK magazines decided that they'd push
for reviewers *not* meeting with makers/designers of reviewed items until
after the actual review was published. This isolated the reviewers from
some of the pressures and the rumours of dirty tricks. But it also meant
that they made daft errors in reviews which a ten-minute chat with the
designer or maker would have sorted out. And it also meant that any
feedback tended to appear in the magazine 2-3 months later, when people
had forgotten most reviews anyway. So most makers and designers decided
to let most review errors and nonsenses pass without specific comment.
It was simpler to rely on the fact that in most cases the reviews of a
product tended to disagree, and a given error was rarely made in more
than one review of a given product. So this was all treated as being like
the British weather. Not ideal, but put up with it. :-)

All of the above relates to the UK. I can't say what the situation has
been elsewhere.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html
 



