September 4th 06, 02:05 PM, posted to uk.rec.audio
Rob
Subject: Too neat to waste...

Jim Lesurf wrote:
> In article , Rob wrote:
>> Jim Lesurf wrote:
>>
>> Indeed - I don't know that 'nobody knows', I just think they can't
>> prove it. The specific case being - an LP and CD from the same
>> master. The LP sounds better to some people. Why?


> In order to be able to attempt to answer your question we'd first
> need to know all the specific details of any experimental comparison
> that led to such a conclusion. This would be first to see if they
> *could* actually tell the difference (solely on the sounds), and then
> to try and form some idea of what the reasons might be.
>
> It is quite possible to speculate as to possible reasons. However,
> without suitable data that might be able to show which speculations
> stood up, and which didn't, that would be all they would be.
>
> I can't help feeling that, as things stand, we have much more in the
> way of assertions with no details, and speculations, than we do cases
> which are documented in a way that would allow us to decide which
> speculations stood up.


I couldn't agree more - it is difficult. It's all well and good saying
that 'such and such' (vinyl is better than CD, whatever) was reported -
the hard part is understanding why. Some researchers disagree with me
strongly on this point btw - on the grounds that 'if it's significant
to the respondent, it's significant to the thesis'. This is broadly a
constructionist's approach, and you can 'construct reality' in this way.

> The problem is that people say things like they prefer one to the
> other, but then don't provide any basis for assessing what they say.
>
> Also, bear in mind that the kinds of methods I tend to describe are
> not my invention. If you look at the literature on perception/hearing
> and the related areas of physiology, etc, you will find that they are
> routine.
>
> For example, if you look at the articles on 'hearing' on the Audio
> Misc pages you can find some references to journal articles. Some of
> those contain other references to literally hundreds of other
> research papers. Many of these report the details of tests which use
> the methods I describe, and have produced a great deal of evidence
> and understanding related to such topics.


That wouldn't surprise me at all. I did start to plough through some of
that literature a while ago, and when I get time I'll do you the
courtesy of a more systematic critique.

> I don't expect anyone to accept the points I make simply because I
> say them. But people can read the detailed reports I am referring to
> for themselves if they wish, and form their own conclusions. Alas, in
> general, the UK consumer magazines don't make any mention of this, so
> people tend to be unaware of just how much work has been done.


>> I'd be more comfortable if you could relax around the notion that
>> hard and fast conclusions are simply not accessible to most. Having
>> a preference is relatively easy - understanding why is rather more
>> complicated (enter Natural Don:-)).


> Sometimes. Alas, this can be technobabble at times, or simply
> nonsense. Varies.


>> I find it difficult to make *any* sense of it. I used to read Noel
>> Keywood's reviews/technical notes on reviews with some interest, but
>> they often appeared to contradict the subjective report. Just plain
>> confusing.


> Alas, I would not recommend you place too much on 'measurements' of
> the kinds that appear in such reports. The problem may be that the
> measurements are inappropriate, or misrepresented in the reports. And
> that other measurements which might shed light on the matter are
> omitted. Also that - as I suspect you have discovered - you can find
> that the subjective comments in different reviews/mags often
> disagree.
>
> This is one of the persistent problems with the UK reviews. They may
> contain some 'measured results'. However, the person doing the review
> may not have really understood which measurements might be relevant,
> or how to interpret them sensibly. The resulting muddle undermines
> both the review and any confidence that measurements can be useful.
>
> Given this, I'd agree with your comment. To me, it just seems like
> many such reviews are essentially worthless, I'm afraid. Some may not
> be, but how can we tell if we only have such poor reviews to go on?
>
> Bear in mind that the person who *designed* the equipment being
> reviewed probably spent many months making all sorts of measurements
> on it as it was developed - as well as listening to the results it
> produced. They probably did a far wider range of such measurements
> than the reviewer. They may then have a much better ability than the
> reviewer to relate measured results to actual performance in use.
>
> The reviewer may simply not have the time or the equipment, nor
> perhaps the ability, to replicate this in the limited scope of a
> magazine review. Alas, in some cases a reviewer may persistently
> misunderstand the meaning of the measurements and the results they
> produce. Given all this, it is understandable why the published
> result may seem so unsatisfactory. It all depends on the individual
> reviewer, etc.


Yes, more's the pity. ISTR one magazine carried reviews with a right to
reply for a while - that was interesting.



>>> I am afraid that I am biased by my own time in the biz, and by many
>>> later occasions. Too often I found by personal experience that what
>>> people claimed didn't stand up when I tried listening or testing
>>> for myself, or when I was involved in comparisons or tests with
>>> others. Thus I have become rather doubtful of what is published in
>>> the UK magazines with no basis in evidence being given.


>> We're all biased, and you're right I think to try and carve out a
>> reliable and replicable method that removes bias. But this is also a
>> methodological point, and relates to beliefs (biases) that all that
>> exists can be expressed in a 'scientifically rigorous' way.


> That isn't what I have been saying, though. :-)
>
> I agree that we can't expect to be able to understand *everything*.
>
> But we may well be able to make some progress and learn things which
> we previously did not know. And then use that understanding as a
> basis for learning more. And to use this to improve things in various
> ways.
>
> My point is that people can *try* to do so. If they do, in some cases
> they may succeed. In others, they may need to re-try and adapt the
> details of the methods. This does not mean we can then explain
> everything by next Thursday. :-)
>
> To me, trying, and seeing if you make progress in some cases, is
> better than not even bothering to try. And as I point out above,
> people are systematically studying relevant areas using the
> approaches I describe. It is just that you don't tend to hear about
> it in the UK consumer magazines!
>
> Alas, the UK magazine reviews generally don't even start the process,
> since they don't normally establish there *is* an audible difference
> between the specific items they compare. Nor do they provide any
> reliable way for us to decide if you or I or anyone else would agree
> with them in each specific case. Maybe a given review is reliable,
> maybe not. But we generally can't tell from the review itself.


More's the pity. I don't tend to buy magazines any more, partly for
that reason, and partly because editorials and features are similarly
meaningless. If I want pulp reviews the web is fine. Strangely, even
though I don't use a PC much nowadays, I find Computer Shopper half
decent and I might buy a couple of those a year.
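
On your point about establishing that there *is* an audible
difference: the standard tool is a forced-choice blind test such as
ABX, scored against chance. Purely by way of illustration (the trial
counts and the helper name abx_p_value below are made up, not results
from any real test), the arithmetic is a one-sided binomial check - a
guessing listener gets each trial right with probability 0.5, so you
ask how likely the observed score would be from guessing alone. A
minimal sketch in Python:

from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    # One-sided binomial p-value: the probability of scoring at least
    # `correct` out of `trials` ABX trials by pure guessing (p = 0.5).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Hypothetical scores, purely for illustration:
print(abx_p_value(12, 16))  # ~0.038 - hard to explain as guessing
print(abx_p_value(10, 16))  # ~0.227 - entirely consistent with guessing

On that reckoning, 12 right out of 16 would be fairly unlikely as pure
guesswork, while 10 out of 16 proves nothing either way - which is
precisely the sort of detail the magazine reviews never report.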

Rob