Audio Banter

Audio Banter (https://www.audiobanter.co.uk/forum.php)
-   uk.rec.audio (General Audio and Hi-Fi) (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/)
-   -   The Outer Shell (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/2524-outer-shell.html)

Chris Morriss November 28th 04 03:14 PM

The Outer Shell
 
In message , Fleetie
writes
"Jim Lesurf" wrote
This isn't simply a matter of whether the pressure level is 'positive' or
'negative' at any one time. The precise shape of the waveform matters, and
tiny details or changes in the shape of the pressure-time patterns can
produce audible effects.


Well yeah but any waveform is just a sum of a load of sinusoidal waves
anyway, by Fourier.

Depends how you look at it.

Anyway, this whole thing is a bit more complex than pressure-versus-time
anyway, because sound is NOT perceived by inputting an electrical
representation of the pressure signal into some wetware "black box" which
does processing on the signal to work out what the sound is.

Rather, in the cochlea, there's a tube, with a bit running along the middle
of it, and a load of tiny hairs, and IIRC, different points along that
structure detect different frequencies, and each hair (or maybe proximate
small group of hairs) sends a signal down a nerve to a part of the brain.
So it's far from simple, to imagine what kind of processing may be going on,
with all those many, many inputs to the brain.

A computer would typically recognise sound (e.g. speech recognition) by
analysing ONE input signal. This is much simpler than what's going on in
our ears/brains, though ISTM that it's possible that our system loses some
phase information.


Martin


But the ear can be thought of as ONE input being fed to a large number
of resonant filters, with the centre frequency of each filter being set
so that the whole collection covers the audio band. The level of each
of the filter outputs is fed to the brain. A bit like a fine-resolution
RTA. (OK, this is simplified, but it's the basic arrangement).
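That "bank of resonant filters" picture is easy to sketch in code. This is a toy model (a hypothetical illustration, nothing like real cochlear mechanics): one input signal feeds a set of two-pole resonators, and only the per-band output level is passed on, as with an RTA.

```python
import math

FS = 16_000  # sample rate in Hz (an assumed value for the sketch)

def resonator_bank_levels(x, centres, fs=FS, r=0.99):
    """Run x through a bank of two-pole resonators, one per centre
    frequency, and return each band's RMS output level."""
    levels = []
    for fc in centres:
        w = 2 * math.pi * fc / fs
        a1 = 2 * r * math.cos(w)   # resonator feedback coefficients
        a2 = -r * r
        y1 = y2 = 0.0
        acc = 0.0
        for xn in x:
            y = (1 - r) * xn + a1 * y1 + a2 * y2   # (1-r) roughly normalises gain
            y2, y1 = y1, y
            acc += y * y
        levels.append((acc / len(x)) ** 0.5)
    return levels

# Feed in a 1 kHz tone: the 1 kHz band should dominate.
tone = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(FS)]
centres = [250, 500, 1000, 2000, 4000]
levels = resonator_bank_levels(tone, centres)
print(centres[levels.index(max(levels))])  # -> 1000
```
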
--
Chris Morriss

Jim Lesurf November 28th 04 04:39 PM

The Outer Shell
 
In article , Fleetie
wrote:
"Jim Lesurf" wrote
This isn't simply a matter of whether the pressure level is 'positive'
or 'negative' at any one time. The precise shape of the waveform
matters, and tiny details or changes in the shape of the pressure-time
patterns can produce audible effects.


Well yeah but any waveform is just a sum of a load of sinusoidal waves
anyway, by Fourier.


Yes. That is one way to represent or analyse the patterns.
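The Fourier point can be demonstrated numerically (a quick sketch, using illustrative values): build a waveform as a sum of sinusoids and the FFT recovers exactly those components.

```python
import numpy as np

fs, f0, n = 8000, 100, 8000   # 1 second of signal, 1 Hz bin spacing
t = np.arange(n) / fs

# A square-ish wave built "by Fourier": odd harmonics at amplitude 1/k.
wave = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in (1, 3, 5, 7))

# The spectrum shows exactly the sinusoids we summed.
spec = np.abs(np.fft.rfft(wave)) / (n / 2)   # normalise to sine amplitude
freqs = np.fft.rfftfreq(n, 1 / fs)
peaks = freqs[spec > 0.05]
print(peaks)  # -> [100. 300. 500. 700.]
```
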

Depends how you look at it.


Anyway, this whole thing is a bit more complex than pressure-versus-time
anyway, because sound is NOT perceived by inputting an electrical
representation of the pressure signal into some wetware "black box"
which does processing on the signal to work out what the sound is.


Agreed. However at the level of the eardrum the effect is that the eardrum
displacement essentially varies with time in a way that is driven by the
external pressure variations just above the ear.

Rather, in the cochlea, there's a tube, with a bit running along the
middle of it, and a load of tiny hairs, and IIRC, different points along
that structure detect different frequencies, and each hair (or maybe
proximate small group of hairs) sends a signal down a nerve to a part of
the brain. So it's far from simple, to imagine what kind of processing
may be going on, with all those many, many inputs to the brain.


Agreed again. FWIW I published an article on this in HFN about a year ago
that also explained the nonlinear physiology in the cochlea. Also used this
to consider human perception of 'time smear' in symmetric reconstruction
filters in a later HFN article. :-)

Indeed, the details at that level are very complex, and far from totally
understood.

However at the level of pressure variations in the air, the points I made
are, I think, reasonably good descriptions of what happens up to the actual
eardrum.

I would agree, though, that the simple pressure model does not normally
deal with things like the interference effects produced by the ear lobes,
etc, and internal vibrations, etc. Hence what I said was simplified as I
thought that was appropriate for the situation I was trying to describe.

A computer would typically recognise sound (e.g. speech recognition) by
analysing ONE input signal. This is much simpler than what's going on in
our ears/brains, though ISTM that it's possible that our system loses
some phase information.


I'd agree in general terms, although at the eardrum level, we get nominally
two signal patterns, but these are modified by the external ear structures,
head shape, and head vibrations to some extent.

The real problem with synthesising a genuine soundfield is, I think, along
the lines the OP has mentioned. When you move your head whilst listening to
a 'stereo' audio system the results won't usually be the same as if you'd
sat in the original soundfield being recorded and moved your head in a
similar way. This is partly due to two level-time patterns not being enough
to convey the full vector field (even at one point and we have two ears!).
Partly because even if stereo *did* record the full vector info, we'd have
to record and replay this for the whole field. i.e. it implies some sort of
vector field recording. 8-]

The good news is that, despite all that, stereo can sound pretty good when
you get things 'about right'. So despite all the problems we can end up
enjoying the music. :-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Kalman Rubinson November 28th 04 05:25 PM

The Outer Shell
 
On Sun, 28 Nov 2004 09:48:57 +0000 (GMT), Jim Lesurf
wrote:

Think of the outer parts of the ears as being pressure detectors. These
pick up the way in which the sound pressure varies with time, and then
convey this pressure-time pattern (or 'waveform') into the inner ear.


No detectors(?) in the outer ear, merely acoustic and impedance
transformers.

The inner ear then examines and analyses the vibration waveform and can
simultaneously recognise many different details.


Inner ear mechanisms transduce the pressure into electrochemical
signals and some processing is applied but I would not call it
anything close to examination and/or analysis.

Over and out.

Kal


Jim Lesurf November 29th 04 08:00 AM

The Outer Shell
 
In article , Kalman
Rubinson
wrote:
On Sun, 28 Nov 2004 09:48:57 +0000 (GMT), Jim Lesurf
wrote:


Think of the outer parts of the ears as being pressure detectors. These
pick up the way in which the sound pressure varies with time, and then
convey this pressure-time pattern (or 'waveform') into the inner ear.


No detectors(?) in the outer ear, merely acoustic and impedance
trasnformers.


Yes. Fair comment. Afraid I was guilty of tending to use a common practice
in experimental physics of sometimes referring to sensors/transducers/etc
as a 'detector'. Agree this can be misleading. The eardrum is part of a
physical system that converts air pressure variations into vibrational
movements of the bones, etc, linked to the eardrum.

The inner ear then examines and analyses the vibration waveform and can
simultaneously recognise many different details.


Inner ear mechanisms transduce the pressure into electrochemical signals
and some processing is applied but I would not call it anything close to
examination and/or analysis.


Again, fair comment. :-)

I would personally regard the phrase "some processing" as rather
underplaying what goes on in the cochlea, though. ;-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Spiderant November 30th 04 04:06 AM

The Outer Shell
 

"Jim Lesurf" wrote in message
...
[snip quoted post]

Hello Again Jim,

I had a hard time trying to figure out who "OP" was. The only OP I ever
know of was the kid on the old Andy Griffith show (an early 1960's North
American TV show). Then I realized you must have referred to my "Opening
Post."

After reading all the replies to my original query, I'm still not convinced
that we're hearing more than the outline or shadow of the music. I agree
that the final effect can be very good, especially with stereo, but I'm not
sure if all the information, or even a good portion of it, coming to the
original microphone(s) is being sent to the speakers. That being said,
these posts have also revealed to me my severe technological ignorance and I
know that, if I want to learn more about what goes into producing a signal,
I'll need to do more research. Ironically, I ended up taking all five
audio-related books out of our local library and, although they all talk
about sound waves and such, I'm missing the details of what exactly is
coming down the wires, and how much it reflects the real world.

But I'll keep looking and listening. And of course, I'll keep on reading
the excellent posts on this newsgroup.

Thanks again,

Roland Goetz.



Spiderant November 30th 04 05:20 AM

The Outer Shell
 

"Stewart Pinkerton" wrote in message
...
On Sun, 28 Nov 2004 01:50:23 GMT, "Spiderant"
wrote:

I really appreciate your pointing me in the right direction in this and
previous posts. I've come to the realization that my understanding of
basic
audio principles is very limited. I picked up some audio books from the
library, which I'll peruse before asking more questions. BTW Your
previous
post about analog waveforms will be the focus of my research.


Don't worry about it. Your willingness to learn places you very high
in the rankings of 'serious audiophiles'. It's always good to remember
that you should always keep an open mind, but be careful that your
brain does not fall out in the process! :-)

Out of curiosity Jim, why do you sign your emails with the term
"Slainte"?
I live on the West Coast of Canada and I've never heard the word. What
does
it mean?


Try some Scots Canadians! It's a Gaelic word meaning 'health', the
full expression is Slainte Mhor, pronounced 'Slaandjivaa'. It literally
means 'big health', but is taken as the ubiquitous 'cheers', and is
the appropriate toast for whisky drinkers.

As an aside, in Jacobite households during the early 18th century, the
'loyal' toast would often be said while passing the charged glass over
the top of the water jug, the toast being 'good health over the
water', a reference to the Pretenders to the Throne of Scotland, the
Stuarts who were in France at the time.
--

Stewart Pinkerton | Music is Art - Audio is Engineering


Thanks for the lore and legend. I actually tried out the toast while having
dinner with my Czech in-laws this evening (a weekly ritual) and, although my
daughter giggled, everyone else gave me a nervous look. Maybe my
pronunciation was off and my Germanic ancestry got in my way. Fortunately,
I do have a very good Scottish friend by the name of Ardel McKenna who is
entering the eighth decade of his life, and this will give me a good reason
to call him up and say hello.

I also know what you mean about the brain falling out in the process. I had
a bit of a relapse over the weekend when I felt that some records sounded
better than their CD counterparts. I even ended up buying a handful of
used records from the local Salvation Army. After a couple of hours of
enduring skipping, crackling and popping, I put on a couple of well-recorded
CDs and remembered why I had made the transition years ago. And the records
I picked up went out with the trash.

A couple of years ago I had a lot of fun building a vacuum tube pre-amp from
a "Foreplay" (don't ask) pre-amp kit available at Bottlehead.com. The site
is linked to a well-maintained, helpful and informative forum. While I was
building the kit, there were numerous threads talking about how designer
metal film resistors were "less noisy" or "more detailed" than traditional
carbon ones. One day I thought I'd try an experiment. I purchased a few of
the "designer" resistors, with some standard metal films, as well as a
number of carbon resistors. I even hit up a friendly elderly gentleman who
repaired televisions and such and asked him for a pair of the biggest 100
ohm carbon resistors he had. He dug his hands into this massive crate
filled with resistors plucked over the years and, reading the colour codes
off of the resistors as if they were written in plain English, he pulled out
a pair of ten watt (or thereabouts) resistors as thick as my thumbs. I
said, "Perfect." I went home and installed them on a pair of eight-position
switches and put them on the signal path near the input from the CD player
as part of the attenuator. With a notepad in hand, I then proceeded to
listen to the differences between the resistors. Much to my surprise, I
didn't hear any difference whatsoever between the resistors, not even
between the super thick carbon resistors and the metal film ones. I tried
prolonged listening over a two week period. I tried switching quickly with
my eyes closed. I tried it with various pieces of music and then with no
music whatsoever with the volume cranked right up. No added distortion. No
hiss. No "Warmer, but more strident." Nothing. If my daughter's life
depended on it, I wouldn't have been able to tell the difference between
carbon film and metal (and, for what it's worth, neither could my daughter
with her much more sensitive ears).

It was then that I realized that a lot of what people say they can hear
between components is mostly what they "believe" they can hear. This
doesn't mean that I haven't heard some horrible systems in my days (my
neighbour's Bang & Olufsens, for example--eesh). But I'm very skeptical
when people say they can hear differences between capacitors, resistors,
most wires, and even between most better-quality amps and CD/DVD players
I've listened to. And this is one of the reasons why I've become very
skeptical of audio claims.

And to finish the story, other than the annoying hum I couldn't seem to get
rid of, once I matched the volumes to the best of my ability, I really
couldn't tell the difference between the Foreplay pre-amp and the pre-amp
built into either my NAD 3020 or Yamaha AX-596. Consequently, the pre-amp
kit has now been retired to my garage.

Time to go listen to some Bach (I picked up a used CD--no pops, pits or
scratches, thank you--of Bach's French Suites as played by our own Glenn
Gould), who always gives me great faith in the underlying order of the
universe.

Thanks again for keeping us straight,

Roland Goetz.



Stewart Pinkerton November 30th 04 06:32 AM

The Outer Shell
 
On Tue, 30 Nov 2004 05:06:30 GMT, "Spiderant"
wrote:


[snip quoted post]

Hello Again Jim,

I had a hard time trying to figure out who "OP" was. The only OP I ever
know of was the kid on the old Andy Griffith show (an early 1960's North
American TV show). Then I realized you must have referred to my "Opening
Post."


Close enough - it's Netspeak for Original Poster.

After reading all the replies to my original query, I'm still not convinced
that we're hearing more than the outline or shadow of the music. I agree
that the final effect can be very good, especially with stereo, but I'm not
sure if all the information, or even a good portion of it, coming to the
original microphone(s) is being sent to the speakers.


If you use CD, you'll find that pretty much *all* the information
which gets out of the microphone reaches the speaker - assuming a
clean mixing and mastering process, of course!

That being said,
these posts have also revealed to me my severe technological ignorance and I
know that, if I want to learn more about what goes into producing a signal,
I'll need to do more research. Ironically, I ended up taking all five
audio-related books out of our local library and, although they all talk
about sound waves and such, I'm missing the details of what exactly is
coming down the wires, and how much it reflects the real world.


That's always been an issue, and there have been brilliant attempts at
truly stereophonic (as in 'solid sound', not 2-channel) sound, the
4-channel Calrec Soundfield mic being perhaps the best. Unfortunately,
such 3-dimensional techniques never reached commercial reality, so
that even with the latest available technology we are stuck with
'flat' 5-channel surround sound in almost all cases.

But I'll keep looking and listening. And of course, I'll keep on reading
the excellent posts on this newsgroup.


Keep listening, that's the real thing. When you close your eyes and
listen to a good recording on a good system, can you really not
suspend your disbelief and get a feeling of 'being there'? If so, get
a better system!
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Jim Lesurf November 30th 04 08:26 AM

The Outer Shell
 
In article qdTqd.370798$Pl.181867@pd7tw1no, Spiderant
wrote:

"Jim Lesurf" wrote in message


Hello Again Jim,


I had a hard time trying to figure out who "OP" was. The only OP I ever
know of was the kid on the old Andy Griffith show (an early 1960's
North American TV show). Then I realized you must have referred to my
"Opening Post."


I think the usual meaning is "Original Post" or similar, but your reading
seems just as appropriate as what I was assuming 'OP' means in this
context. You'll find that newsgroup postings tend to use a lot of such
abbreviations. :-)

After reading all the replies to my original query, I'm still not
convinced that we're hearing more than the outline or shadow of the
music. I agree that the final effect can be very good, especially with
stereo, but I'm not sure if all the information, or even a good portion
of it, coming to the original microphone(s) is being sent to the
speakers.


Yes and no. :-)

Each microphone may pick up either 'air pressure variations' or 'air
displacement variations'. Thus a mic that senses pressure will give an
output voltage that varies with time that represents the way the sound
pressure at the microphone varies with time. Similarly, a displacement
sensor will give an output that conveys the displacement (or velocity) of
the air along a given direction. In each case they sense this at the
location of the mic.
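The pressure/displacement distinction above comes down to different pickup patterns. As a sketch (idealised capsules, hypothetical function names): an omni pressure capsule reads the same from any angle, while a figure-8 velocity/gradient capsule scales with the cosine of the angle of incidence.

```python
import math

def pressure_mic(p, theta):
    """Ideal omni (pressure) capsule: output independent of angle."""
    return p

def velocity_mic(p, theta):
    """Ideal figure-8 (velocity/gradient) capsule: cosine pattern,
    with a null for sound arriving from the side."""
    return p * math.cos(theta)

# Sound from straight ahead vs. from the side (90 degrees):
print(pressure_mic(1.0, math.pi / 2))            # -> 1.0
print(round(velocity_mic(1.0, math.pi / 2), 6))  # -> 0.0
```
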

Hence with a good microphone you can expect to get an output that indicates
the signal in terms of what frequencies are present, etc. But at a specific
location. In this limited respect, some microphones can do an excellent
job.

However the situation with 'stereo' and trying to indicate a 'soundfield'
(how the sound wave patterns vary in a volume of space) is much more
difficult. Given two ears, and the ability to move our head, we can explore
this. Stereo and surround try to mimic this by using more than one mic and
then trying to combine their outputs via 2 channels (stereo) or more with
the aim of producing a convincing 'sound image' when the sound is replayed.
The good news is that our hearing seems to be able to be lulled into
accepting the results.

To make things more difficult, the room we play the music in at home also
has its own acoustic properties which then tend to affect the result if we
use loudspeakers.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Arny Krueger November 30th 04 02:48 PM

The Outer Shell
 
"Spiderant" wrote in message
news:FiUqd.371399$Pl.140812@pd7tw1no

[snip quoted post]


Been there, done that more-or-less.

I have a web site devoted to clarifying the fact that what people say they
can
hear between components is mostly what they "believe" they can hear.

www.pcabx.com
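
ABX trials of the kind that site collects are scored against chance. A sketch of the usual binomial calculation (my own illustration, not code from the site):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` right out of `trials`
    ABX presentations purely by guessing (p = 0.5 each trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 right is commonly taken as evidence of an audible difference:
print(round(abx_p_value(12, 16), 3))  # -> 0.038
```
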




Stewart Pinkerton November 30th 04 04:44 PM

The Outer Shell
 
On Tue, 30 Nov 2004 06:20:21 GMT, "Spiderant"
wrote:

A couple of years ago I had a lot of fun building a vacuum tube pre-amp from
a "Foreplay" (don't ask) pre-amp kit available at Bottlehead.com.


Ahh, you have a healthy interest in S.E.X, do you? :-)

The site
is linked to a well-maintained, helpful and informative forum. While I was
building the kit, there were numerous threads talking about how designer
metal film resistors were "less noisy" or "more detailed" than traditional
carbon ones.


Actually, that's perfectly true - although the difference may not be
audible. It's certainly measurable.

One day I thought I'd try an experiment. I purchased a few of
the "designer" resistors, with some standard metal films, as well as a
number of carbon resistors. I even hit up a friendly elderly gentleman who
repaired televisions and such and asked him for a pair of the biggest 100
ohm carbon resistors he had. He dug his hands into this massive crate
filled with resistors plucked over the years and, reading the colour codes
off of the resistors as if they were written in plain English,


Yup, most of us old hands can do that. Yellow violet orange was always
instantly recognised in vinyl days! :-)
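The colour code itself is mechanical enough to sketch (yellow-violet-orange being the classic 47k):

```python
COLOURS = ["black", "brown", "red", "orange", "yellow",
           "green", "blue", "violet", "grey", "white"]

def resistor_ohms(band1, band2, multiplier):
    """Decode a three-band colour code: two significant digits
    followed by a power-of-ten multiplier."""
    value = {c: i for i, c in enumerate(COLOURS)}
    return (10 * value[band1] + value[band2]) * 10 ** value[multiplier]

print(resistor_ohms("yellow", "violet", "orange"))  # -> 47000
```
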

he pulled out
a pair of ten watt (or thereabouts) resistors as thick as my thumbs. I
said, "Perfect." I went home and installed them on a pair of eight-position
switches and put them on the signal path near the input from the CD player
as part of the attenuator. With a notepad in hand, I then proceeded to
listen to the differences between the resistors. Much to my surprise, I
didn't hear any difference whatsoever between the resistors, not even
between the super thick carbon resistors and the metal film ones. I tried
prolonged listening over a two week period. I tried switching quickly with
my eyes closed. I tried it with various pieces of music and then with no
music whatsoever with the volume cranked right up. No added distortion. No
hiss. No "Warmer, but more strident." Nothing. If my daughter's life
depended on it, I wouldn't have been able to tell the difference between
carbon film and metal (and, for what it's worth, neither could my daughter
with her much more sensitive ears).


Well to be fair, you wouldn't expect to with massive ten watt carbons.
A pair of half-watt cracked carbons might just have been audible, but
likely not. Personally, I can never hear the difference between
ordinary metal films and ultra-quality Vishay S102 bulk metals.

It was then that I realized that a lot of what people say they can hear
between components is mostly what they "believe" they can hear. This
doesn't mean that I haven't heard some horrible systems in my days (my
neighbour's Bang & Olufsens, for example--eesh). But I'm very skeptical
when people say they can hear differences between capacitors, resistors,
most wires, and even between most better-quality amps and CD/DVD players
I've listened to. And this is one of the reasons why I've become very
skeptical of audio claims.


Very wise. If you actually could hear such differences, you could pick
up enough cash on this newsgroup to buy a new Michell Orbe............

And to finish the story, other than the annoying hum I couldn't seem to get
rid of, once I matched the volumes to the best of my ability, I really
couldn't tell the difference between the Foreplay pre-amp and the pre-amp
built into either my NAD 3020 or Yamaha AX-596. Consequently, the pre-amp
kit has now be retired to my garage.


Well, that just shows that the Foreplay was pretty good! :-)

In a level-matched double-blind test driving Apogee Duetta Signatures,
I could only just tell a difference between a Yamaha AX-570 and a
Krell KSA-50 mkII, just a *tiny* bit of treble brightness on the
Yammy.

Time to go listen to some Bach (I picked up a used CD--no pops, pits or
scratches, thank you--of Bach's French Suites as played by our own Glen
Gould), who always gives me great faith in the underlying order of the
universe.

Thanks again for keeping us straight,


That's why we're here! :-)
--

Stewart Pinkerton | Music is Art - Audio is Engineering

