Audio Banter

Audio Banter (https://www.audiobanter.co.uk/forum.php)
-   uk.rec.audio (General Audio and Hi-Fi) (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/)
-   -   The Outer Shell (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/2524-outer-shell.html)

Spiderant November 25th 04 03:03 AM

The Outer Shell
 
I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment, you
will only hear these peaks and valleys, and none of the filler in-between.
I know that I'm not explaining this using proper audio terminology, but his
explanation seems logical to me. If, for example, a clarinet and a flute
are playing at the same time, all we will ever hear from the recording is
the "combined" signal.

The result of this is that, no matter how good the recording is, we can
never truly hear the individual instruments which, of course, negates things
like "air" around the instruments (unless, of course, there is a space
between the actual notes). In fact, we can never hear the entire orchestra,
nor differentiate between the instruments playing. All we hear is the
shadow of the music.

If this idea is way off, please correct me. I have very little technical
knowledge, but I do love music. Any help would be greatly appreciated.

Roland Goetz.





Eiron November 25th 04 05:22 AM

The Outer Shell
 
Spiderant wrote:
I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment, you
will only hear these peaks and valleys, and none of the filler in-between.
I know that I'm not explaining this using proper audio terminology, but his
explanation seems logical to me. If, for example, a clarinet and a flute
are playing at the same time, all we will ever hear from the recording is
the "combined" signal.


What did your professors of biology and physics think of your
philosophy professor?

Nick Gorham November 25th 04 06:17 AM

The Outer Shell
 
Eiron wrote:
Spiderant wrote:

I once had a philosophy professor who casually mentioned to the class
that when we listen to a recorded piece of music, we don't hear the
entire spectrum of the music, but only the outer shell. He explained
that when, for example, a classical symphony is recorded, only the
extreme peaks and valleys of the signal are picked up and when the
recording is played back, because the speakers can only move in one
direction at any given moment, you will only hear these peaks and
valleys, and none of the filler in-between. I know that I'm not
explaining this using proper audio terminology, but his explanation
seems logical to me. If, for example, a clarinet and a flute are
playing at the same time, all we will ever hear from the recording is
the "combined" signal.



What did your professors of biology and physics think of your
philosophy professor?


Saves me the trouble of saying something similar.

When the pressure waves from the two instruments get to your ears, they
have combined in just the same way, so to use your prof's description,
we can never hear the two instruments at that point. It's the processing
the brain does that allows us to decide the combined sound is actually
the product of two sources, and it's just the same when hearing the
recording.

--
Nick

Stewart Pinkerton November 25th 04 06:52 AM

The Outer Shell
 
On Thu, 25 Nov 2004 04:03:56 GMT, "Spiderant"
wrote:

I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment, you
will only hear these peaks and valleys, and none of the filler in-between.
I know that I'm not explaining this using proper audio terminology, but his
explanation seems logical to me. If, for example, a clarinet and a flute
are playing at the same time, all we will ever hear from the recording is
the "combined" signal.


Had he been a *physics* professor, he would have known better........

The result of this is that, no matter how good the recording is, we can
never truly hear the individual instruments which, of course, negates things
like "air" around the instruments (unless, of course, there is a space
between the actual notes). In fact, we can never hear the entire orchestra,
nor differentiate between the instruments playing. All we hear is the
shadow of the music.

If this idea is way off, please correct me. I have very little technical
knowledge, but I do love music. Any help would be greatly appreciated.


It's way off. All you have to do is listen to a good recording played
on a good system, and you'll realise that the guy was talking utter
********.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Jim Lesurf November 25th 04 08:08 AM

The Outer Shell
 
In article MQcpd.321783$nl.260854@pd7tw3no, Spiderant
wrote:
I once had a philosophy professor who casually mentioned...


[snip]

If this idea is way off, please correct me. I have very little
technical knowledge, but I do love music. Any help would be greatly
appreciated.


Afraid that if you report them accurately, then his ideas tell us that his
understanding of physics, physiology, etc, was pretty limited. I'd
recommend that you regard his views on this as misleading and misguided.
Think of them as being a "philosopher's song" version of the subject. :-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Keith G November 25th 04 11:52 AM

The Outer Shell
 

"Spiderant" wrote in message
news:MQcpd.321783$nl.260854@pd7tw3no...
I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment,
you will only hear these peaks and valleys, and none of the filler
in-between. I know that I'm not explaining this using proper audio
terminology, but his explanation seems logical to me. If, for example, a
clarinet and a flute are playing at the same time, all we will ever hear
from the recording is the "combined" signal.




If that's the case, take your audio kit to the nearest recycling centre and
swap it for a three piece suite......






Chris Morriss November 25th 04 05:46 PM

The Outer Shell
 
In message MQcpd.321783$nl.260854@pd7tw3no, Spiderant
writes
I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment, you
will only hear these peaks and valleys, and none of the filler in-between.
I know that I'm not explaining this using proper audio terminology, but his
explanation seems logical to me. If, for example, a clarinet and a flute
are playing at the same time, all we will ever hear from the recording is
the "combined" signal.

The result of this is that, no matter how good the recording is, we can
never truly hear the individual instruments which, of course, negates things
like "air" around the instruments (unless, of course, there is a space
between the actual notes). In fact, we can never hear the entire orchestra,
nor differentiate between the instruments playing. All we hear is the
shadow of the music.

If this idea is way off, please correct me. I have very little technical
knowledge, but I do love music. Any help would be greatly appreciated.

Roland Goetz.




What a plonker he was. And what were his views on the eardrum of the
listener (it being a diaphragm etc.)?
--
Chris Morriss

Ian Bell November 25th 04 06:10 PM

The Outer Shell
 
Spiderant wrote:

I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment,
you will only hear these peaks and valleys, and none of the filler
in-between. I know that I'm not explaining this using proper audio
terminology, but his
explanation seems logical to me. If, for example, a clarinet and a flute
are playing at the same time, all we will ever hear from the recording is
the "combined" signal.

The result of this is that, no matter how good the recording is, we can
never truly hear the individual instruments which, of course, negates
things like "air" around the instruments (unless, of course, there is a
space
between the actual notes). In fact, we can never hear the entire
orchestra,
nor differentiate between the instruments playing. All we hear is the
shadow of the music.

If this idea is way off, please correct me. I have very little technical
knowledge, but I do love music. Any help would be greatly appreciated.

Roland Goetz.


I think you will find most of this group will tell you that your philosophy
professor is completely wrong.

Ian
--
Ian Bell

Spiderant November 26th 04 01:57 AM

The Outer Shell
 

"Chris Morriss" wrote in message
...
In message MQcpd.321783$nl.260854@pd7tw3no, Spiderant
writes
What a plonker he was. And what were his views on the eardrum of the
listener, (being a diaphragm etc:)
--
Chris Morriss


I actually did think about that. When listening to a live performance, all
the music is hitting my eardrums simultaneously (well, maybe not
simultaneously as, from what I understand, some frequencies travel faster
than others). Consequently, as per your suggestion, I would only hear the
combined instruments--but only if I held my head exactly the same way and,
perhaps, only if the musicians held perfectly still. But as soon as I would
turn my ears towards, say, the clarinets, then they would dominate over the
violins, and so on. And when the solo pianist would start to play, I would
turn my head towards him or her and the piano would dominate. As a result,
a live performance would seem much more dimensional, would it not? Since a
recording can only play the combined signal from a stationary point,
regardless of how I would turn my head when listening to my speakers, I
don't see how I could distinguish the instruments in the same way.

Please correct me if I'm wrong.

Roland Goetz.



Spiderant November 26th 04 02:32 AM

The Outer Shell
 

"Ian Bell" wrote in message
...
Spiderant wrote:


I think you will find most of this group will tell you that your
philosophy
professor is completely wrong.

Ian
--
Ian Bell


I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone would tell me a proper explanation as to
why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before an
orchestra, and the microphone is connected to an oscilloscope, from what I
know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded, and
even if I had as many speakers as instruments in an orchestra, I can never
again break the signal up to reproduce the original instruments. The recording is
forever going to be only a shadow of the orchestra. Again, this seems quite
logical to me.

Now, as I believe Chris Morriss suggested in another posting, the diaphragm
of an ear is not unlike the diaphragm of a microphone. Consequently, when
listening to a live concert, I too would only hear the combined signal
coming from the orchestra. However, as I mentioned to Mr. Morriss, when we
go to a concert, it is not a static event. We're constantly turning our
heads and thereby altering the signal coming to our eardrums. Therefore,
even if we can only experience the combined signal while attending a live
recording, this shadow is constantly shifting and changing along with the
shifts of our heads and it becomes possible to discern the individual
instruments that a static recording can never reveal.

Again, please correct me if this analagy is incorrect.

Roland Goetz.






Kalman Rubinson November 26th 04 03:04 AM

The Outer Shell
 
One issue is that the oscilloscope is not showing you all the
information in the signal that allows the discrimination of individual
instruments and other tonal/spatial details. The scope only shows the
envelope of the total energy at a particular instant and not the
individual elements which contribute to that envelope. As a simple
example, compare the single instantaneous value on the scope with the
detailed information seen on a frequency analyzer at that same
instant. The ear is pretty good at a similar discrimination and
extracts more information than a simple oscilloscope.
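Kal's scope-versus-analyser comparison can be sketched numerically. The
following Python/numpy fragment is illustrative only (the two tone
frequencies and the peak threshold are arbitrary choices, not anything from
the thread): at any one instant the combined waveform is a single number,
yet a frequency analysis over a stretch of the same samples separates the
two tones again.

```python
import numpy as np

fs = 8000                     # sample rate, Hz
t = np.arange(fs) / fs        # one second of time samples

# Two "instruments": pure tones standing in for a clarinet and a flute.
clarinet = np.sin(2 * np.pi * 440 * t)
flute = 0.5 * np.sin(2 * np.pi * 880 * t)

# What the microphone (or eardrum) receives: a single combined pressure.
mix = clarinet + flute

# A scope trace gives only one value per instant...
print("instantaneous value at t=0.1s:", mix[800])

# ...but a frequency analysis over a window separates the components.
spectrum = np.abs(np.fft.rfft(mix)) / len(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / fs)
peaks = freqs[spectrum > 0.1]
print("tones found in the mix:", peaks)   # 440 Hz and 880 Hz
```

The single sample is just one number, yet the spectrum of the surrounding
second of audio shows both tones, each at its own level.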

Kal

On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:


"Ian Bell" wrote in message
...
Spiderant wrote:


I think you will find most of this group will tell you that your
philosophy
professor is completely wrong.

Ian
--
Ian Bell


I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone would tell me a proper explanation as to
why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before an
orchestra, and the microphone is connected to an oscilloscope, from what I
know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded, and
even if I had as many speakers as instruments in an orchestra, I never again
break the signal up to reproduce the original instruments. The recording is
forever going to be only a shadow of the orchestra. Again, this seems quite
logical to me.

Now, as I believe Chris Morriss suggested in another posting, the diaphragm
of an ear is not unlike the diaphragm of a microphone. Consequently, when
listening to a live concert, I too would only hear the combined signal
coming from the orchestra. However, as I mentioned to Mr. Morriss, when we
go to a concert, it is not a static event. We're constantly turning our
heads and thereby altering the signal coming to our eardrums. Therefore,
even if we can only experience the combined signal while attending a live
recording, this shadow is constantly shifting and changing along with the
shifts of our heads and it becomes possible to discern the individual
instruments that a static recording can never reveal.

Again, please correct me if this analagy is incorrect.

Roland Goetz.






Spiderant November 26th 04 03:42 AM

The Outer Shell
 
Because I don't have a frequency analyzer kicking around, I tried to come up
with some images to see what you are referring to (see this link:
http://www.softpicks.net/software/Fr...yzer-6079.htm).

I appreciate your explanation.

It certainly appears on the above screenshot that there is more happening at
a given moment than a momentary energy pulse. And if the screenshot is
correct, then what I always assumed was only a linear stream of pulses
coming from the microphone is in effect a multitude of simultaneous pulses.
And if, for example, this signal is digitized, then instead of a linear
series of plusses and minuses you're saying that there is, in effect, more
like a continuous stream of shotgun-style pepper blasts of multiple
simultaneous frequencies. Hmmmm. I have a bit of a hard time grasping this
because it would imply that, once the signal got to a speaker cone, the
speaker cone would need to move in and out simultaneously, which doesn't seem
possible. Could you elucidate further where my thinking is flawed?

Much appreciated,

Roland Goetz.


"Kalman Rubinson" wrote in message
...
One issue is that the oscilloscope is not showing you all the
information in the signal that allows the discrimination of individual
instruments and other tonal/spatial details. The scope only shows the
envelope of the total energy at a particular instant and not the
individual elements which contribute to that envelope. As a simple
example, compare the single instantaneous value on the scope with the
detailed information seen on a frequency analyzer at that same
instant. The ear is pretty good at a similar discrimination and
extracts more information than a simple oscilloscope.

Kal

On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:


"Ian Bell" wrote in message
...
Spiderant wrote:


I think you will find most of this group will tell you that your
philosophy
professor is completely wrong.

Ian
--
Ian Bell


I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone would tell me a proper explanation as
to
why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before
an
orchestra, and the microphone is connected to an oscilloscope, from what I
know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded,
and
even if I had as many speakers as instruments in an orchestra, I never
again
break the signal up to reproduce the original instruments. The recording
is
forever going to be only a shadow of the orchestra. Again, this seems
quite
logical to me.

Now, as I believe Chris Morriss suggested in another posting, the
diaphragm
of an ear is not unlike the diaphragm of a microphone. Consequently, when
listening to a live concert, I too would only hear the combined signal
coming from the orchestra. However, as I mentioned to Mr. Morriss, when
we
go to a concert, it is not a static event. We're constantly turning our
heads and thereby altering the signal coming to our eardrums. Therefore,
even if we can only experience the combined signal while attending a live
recording, this shadow is constantly shifting and changing along with the
shifts of our heads and it becomes possible to discern the individual
instruments that a static recording can never reveal.

Again, please correct me if this analagy is incorrect.

Roland Goetz.








Nick Gorham November 26th 04 06:23 AM

The Outer Shell
 
Spiderant wrote:
"Ian Bell" wrote in message
...

Spiderant wrote:



I think you will find most of this group will tell you that your
philosophy
professor is completely wrong.

Ian
--
Ian Bell



I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone would tell me a proper explanation as to
why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before an
orchestra, and the microphone is connected to an oscilloscope, from what I
know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded, and
even if I had as many speakers as instruments in an orchestra, I never again
break the signal up to reproduce the original instruments. The recording is
forever going to be only a shadow of the orchestra. Again, this seems quite
logical to me.

Now, as I believe Chris Morriss suggested in another posting, the diaphragm
of an ear is not unlike the diaphragm of a microphone. Consequently, when
listening to a live concert, I too would only hear the combined signal
coming from the orchestra. However, as I mentioned to Mr. Morriss, when we
go to a concert, it is not a static event. We're constantly turning our
heads and thereby altering the signal coming to our eardrums. Therefore,
even if we can only experience the combined signal while attending a live
recording, this shadow is constantly shifting and changing along with the
shifts of our heads and it becomes possible to discern the individual
instruments that a static recording can never reveal.

Again, please correct me if this analagy is incorrect.


Replace the single microphone with a crossed pair, and move your head
from side to side whilst listening via a stereo pair of speakers and you
will have the same thing. It may be different though if you move your
head up and down, that information will not be recorded with a crossed pair.

You could attempt to apply the same argument to the individual
instruments: do you ever hear the guitar (for example), or just the
combined sound of the strings, fretboard, sounding-box mouth, and the body?

--
Nick

Stewart Pinkerton November 26th 04 06:26 AM

The Outer Shell
 
On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:

I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone would tell me a proper explanation as to
why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before an
orchestra, and the microphone is connected to an oscilloscope, from what I
know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded, and
even if I had as many speakers as instruments in an orchestra, I never again
break the signal up to reproduce the original instruments. The recording is
forever going to be only a shadow of the orchestra. Again, this seems quite
logical to me.


You are forgetting one critical point in a modern recording - it's in
stereo. The very best, in accuracy terms, are made using minimalist
microphone techniques in real concert halls, and they can replicate
the ambience of the hall extremely well.

Now, as I believe Chris Morriss suggested in another posting, the diaphragm
of an ear is not unlike the diaphragm of a microphone. Consequently, when
listening to a live concert, I too would only hear the combined signal
coming from the orchestra. However, as I mentioned to Mr. Morriss, when we
go to a concert, it is not a static event. We're constantly turning our
heads and thereby altering the signal coming to our eardrums. Therefore,
even if we can only experience the combined signal while attending a live
recording, this shadow is constantly shifting and changing along with the
shifts of our heads and it becomes possible to discern the individual
instruments that a static recording can never reveal.


Given a good stereo recording, as described above, the soundfield
reaching your head will closely mimic that which would reach your ears
in the original concert hall at the microphone position, and sure
enough, you can 'focus' on individual performers by slight movement of
your head in the same way. The only real drawback is that, in a
top-class system playing such a 'minimalist' top-class recording, the
'sweet spot' is very small, and moving your head more than a couple of
inches from the bisector of the speakers will destroy the sharpness of
the imaging.

Again, please correct me if this analagy is incorrect.


It is incorrect, it should be 'this analogy'............ :-)

Besides, just *listen* to a good recording on a good system. One
careful observation is worth a thousand philosophical discussions!
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Nick Gorham November 26th 04 06:30 AM

The Outer Shell
 
Kalman Rubinson wrote:
One issue is that the oscilloscope is not showing you all the
information in the signal that allows the discrimination of individual
instruments and other tonal/spatial details. The scope only shows the
envelope of the total energy at a particular instant and not the
individual elements which contribute to that envelope. As a simple
example, compare the single instantaneous value on the scope with the
detailed information seen on a frequency analyzer at that same
instant. The ear is pretty good at a similar discrimination and
extracts more information than a simple oscilloscope.



I would disagree slightly there: the oscilloscope is not extracting or
processing any information, it's just showing a voltage-against-time
display. All the information in the signal is being displayed by the
scope; what's different is that our eyes are not able to extract and
process the information.

If you only have a single instantaneous value on the scope you have no
frequency information, and if you send that single value to a frequency
analyser it will show nothing; it's just the difference between viewing
the signal in the time and frequency domains.

You could equally say, that the frequency display also doesn't display
all the information, as it provides no information about the phase
relationships between the various frequencies that it displays, whereas
this information is displayed by the scope.
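Nick's point about phase can be demonstrated in a few lines of Python/numpy
(an illustrative sketch, not part of the original discussion): two signals
built from the same frequencies at the same levels, but with different
phases, have identical magnitude spectra yet clearly different waveforms.

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs

# Two signals made of the same two frequencies at the same levels,
# differing only in the phase of the second component.
a = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 100 * t)
b = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 100 * t + np.pi / 2)

# A magnitude-only frequency display cannot tell them apart...
mag_a = np.abs(np.fft.rfft(a))
mag_b = np.abs(np.fft.rfft(b))
print("magnitude spectra identical:", np.allclose(mag_a, mag_b))   # True

# ...but the scope trace (the waveform itself) clearly can.
print("waveforms identical:", np.allclose(a, b))                   # False
```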

--
Nick

Jim Lesurf November 26th 04 09:08 AM

The Outer Shell
 
In article xYwpd.329777$Pl.264539@pd7tw1no, Spiderant
wrote:

"Chris Morriss" wrote in message
...
In message MQcpd.321783$nl.260854@pd7tw3no, Spiderant
writes What a plonker he was. And what were his
views on the eardrum of the listener, (being a diaphragm etc:) --
Chris Morriss


I actually did think about that. When listening to a live performance,
all the music is hitting my eardrums simultaneously (well, maybe not
simultaneously as, from what I understand, some frequencies travel
faster than others).


No. In air, and at normal sound levels, the frequencies all travel at
essentially the same velocities.

Consequently, as per your suggestion, I would only hear the combined
instruments--but only if I held my head exactly the same way and,
perhaps, only if the musicians held perfectly still. But as soon as I
would turn my ears towards, say, the clarinets, then they would dominate
over the violins, and so on. And when the solo pianist would start to
play, I would turn my head towards him or her and the piano would
dominate. As a result, a live performance would seem much more
dimensional, would it not?


This depends upon how well a *stereo* (or 'surround') recording replicates
the original soundfield in terms of perception. Stereo is a 'trick' in the
sense that it does not set out to physically replicate the original
soundfield, but to give an effect which tricks our perception into thinking
we are hearing a convincing representation of that soundfield.
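One of the directional cues this 'trick' exploits can be sketched in a few
lines of Python. This uses Woodworth's spherical-head approximation for the
interaural time difference; the head radius and the formula are textbook
simplifications chosen for illustration, not figures from this thread.

```python
import math

# Interaural time difference (ITD): the extra travel time to the far ear
# for a source off to one side. (Simplified spherical-head model.)
head_radius = 0.0875          # metres, a typical adult head
c = 343.0                     # speed of sound in air, m/s

def itd_seconds(angle_deg):
    """Woodworth's approximation: r * (theta + sin(theta)) / c."""
    theta = math.radians(angle_deg)
    return head_radius * (theta + math.sin(theta)) / c

# A source 90 degrees to one side arrives roughly 0.66 ms earlier
# at the near ear - a tiny delay, but one the brain resolves easily.
print(round(itd_seconds(90) * 1000, 2), "ms")
```

Stereo recording and playback preserve enough of these tiny time and level
differences between the two channels for perception to rebuild a direction.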


Since a recording can only play the combined signal from a stationary
point,


Recordings can be made using a multiplicity of microphones, located in
various places. The replay can involve two or more speakers not located in
the same place.

regardless of how I would turn my head when listening to my
speakers, I don't see how I could distinguish the instruments in the
same way.


Please correct me if I'm wrong.


If you listen to good stereo recordings, played using good speakers, in a
suitable room acoustic, it is possible to get a 'stereo' effect that is a
fairly convincing impression of having the instruments laid out in front of
you as they would be at, say, an orchestral concert. Once you hear this,
you can decide for yourself that assuming it is not possible must be
incorrect. :-)

FWIW during the last week or so I did some minor fiddling about with the
audio system in the living room. This produced some apparent changes which
I think have improved the results. A consequence is that I enjoyed spending
time yesterday listening to;

1) CD-A of the Bartok Concerto for Orchestra performance (Mercury Living
Presence on Mercury 432 017-2)

2) CD-A of "English String Music" Barbirolli and the Sinfonia of London
(EMI CDC 7 47537 2) [I also have the later 'ART' re-issue, but tried the
earlier version on this occasion.]

Chose these simply as they are performances/recordings I have enjoyed in
the past, and fancied re-listening to them.

In both cases I had the distinct impression of quite a convincingly
realistic sound of instruments laid out in an acoustic space. No idea if
this is exactly what it sounded like at the time, but the results sounded
like a good directional image to me.

That said, I had to ensure the speakers and my head were in the 'right
places' to get the best effect. :-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Jim Lesurf November 26th 04 09:13 AM

The Outer Shell
 
In article otxpd.338175$nl.283401@pd7tw3no, Spiderant
wrote:



But let me rephrase my question a bit. If a microphone is placed before
an orchestra, and the microphone is connected to an oscilloscope, from
what I know of oscilloscopes, the signal is not going to show every
individual instrument, but only the combined sounds coming from the
orchestra.


The signals from the different instruments will be linearly superimposed,
i.e. the information about all the instruments reaching the microphone
location will all be present at that point.
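A short numerical sketch of what "linearly superimposed" means (Python/numpy,
illustrative only; the frequencies and amplitudes are made up): the
microphone signal is the sample-by-sample sum of the sources, and because
sinusoids at different frequencies are orthogonal over a whole number of
cycles, each source's level is still recoverable from the sum.

```python
import numpy as np

fs = 4000
t = np.arange(fs) / fs

# Two sources at different frequencies and levels.
src1 = 0.8 * np.sin(2 * np.pi * 330 * t)
src2 = 0.3 * np.sin(2 * np.pi * 660 * t)

# Linear superposition: the air (and the microphone) simply adds them.
mix = src1 + src2

# Projecting the mix onto each reference tone recovers each level,
# because sinusoids at different exact-bin frequencies are orthogonal.
ref1 = np.sin(2 * np.pi * 330 * t)
ref2 = np.sin(2 * np.pi * 660 * t)
level1 = 2 * np.dot(mix, ref1) / len(t)
level2 = 2 * np.dot(mix, ref2) / len(t)
print(round(level1, 3), round(level2, 3))   # 0.8 0.3
```

Nothing is lost by the mixing itself; the question is only what processing
(a frequency analyser, or the ear and brain) is applied to the sum.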


Consequently, no matter what I do with that signal after it
is recorded, and even if I had as many speakers as instruments in an
orchestra, I never again break the signal up to reproduce the original
instruments. The recording is forever going to be only a shadow of the
orchestra. Again, this seems quite logical to me.


Yes. The same would occur if your ear was at the microphone location. The
sound pressure at your ear would be the same linear superposition.

Hence the place where the sounds are 'broken up' again and identified is in
your ears/head in each case.

Now, as I believe Chris Morriss suggested in another posting, the
diaphragm of an ear is not unlike the diaphragm of a microphone.
Consequently, when listening to a live concert, I too would only hear
the combined signal coming from the orchestra. However, as I mentioned
to Mr. Morriss, when we go to a concert, it is not a static event.
We're constantly turning our heads and thereby altering the signal
coming to our eardrums. Therefore, even if we can only experience the
combined signal while attending a live recording, this shadow is
constantly shifting and changing along with the shifts of our heads and
it becomes possible to discern the individual instruments that a static
recording can never reveal.


Again, please correct me if this analagy is incorrect.


Yes. Please see my comments elsewhere. 'Stereo' is essentially a 'trick'
which exploits the properties of human perception. How well it works in any
case depends upon the recording, the replay system (including the room),
and the individual.

My experience is that it can sometimes work very well, but in other cases
not at all. :-)

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Jim Lesurf November 26th 04 09:17 AM

The Outer Shell
 
In article , Kalman
Rubinson
wrote:
One issue is that the oscilloscope is not showing you all the information
in the signal that allows the discrimination of individual instruments
and other tonal/spatial details. The scope only shows the envelope of
the total energy at a particular instant and not the individual elements
which contribute to that envelope. As a simple example, compare the
single instantaneous value on the scope with the detailed information
seen on a frequency analyzer at that same instant. The ear is pretty
good at a similar discrimination and extracts more information than a
simple oscilloscope.


Well, the oscilloscope display does not really 'extract information' beyond
showing you a level-time waveform pattern. Up to the viewer to make sense
of it. :-)

It is potentially misleading to compare a scope reading as a "single
instantaneous value" - i.e. at just one time instant - with a complete
spectrum. The same information is in the level-time waveform pattern for a
given time interval as is in the spectrum of that chunk of time. Thus a
finite extended time interval is required in both cases to compare "like
with like".
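The 'like with like' point can be checked directly: take a finite chunk of waveform, transform it to a spectrum, and transform back, and nothing is lost in either direction. (Illustrative Python with a naive O(N^2) DFT; the 32-sample chunk and its tone frequencies are arbitrary choices.)

```python
import cmath, math

# A rough sketch: over a finite time interval, the level-time waveform and
# its spectrum carry the same information -- the DFT maps one to the other
# and back exactly. (Naive transforms, purely for illustration.)

def dft(x):
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

def inverse_dft(spectrum):
    n_pts = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * n / n_pts)
                for k in range(n_pts)).real / n_pts for n in range(n_pts)]

# A chunk of waveform: two tones, values chosen arbitrarily.
chunk = [math.sin(2 * math.pi * 3 * n / 32) +
         0.5 * math.cos(2 * math.pi * 7 * n / 32) for n in range(32)]
recovered = inverse_dft(dft(chunk))

# Round trip: the spectrum of the chunk reproduces the chunk exactly.
assert max(abs(a - b) for a, b in zip(chunk, recovered)) < 1e-9
```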

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Jim Lesurf November 26th 04 04:23 PM

The Outer Shell
 
In article Ruypd.343810$%k.53935@pd7tw2no, Spiderant

wrote:
Because I don't have a frequency analyzer kicking around, I tried to
come up with some images to see what you are referring to (see this
link: http://www.softpicks.net/software/Fr...yzer-6079.htm).


I appreciate your explanation.


It certainly appears on the above screenshot that there is more
happening at a given moment than a momentary energy pulse. And if the
screenshot is correct, then what I always assumed was only a linear
stream of pulses coming from the microphone is in effect a multitude of
simultaneous pulses.


Not looked at the URL you give. However I suspect that thinking of audio
waveforms as 'streams of pulses' is probably quite a confusing and
inappropriate way to describe what is occurring. Analog waveforms are
nominally a smoothly varying level whose pattern of level-time fluctuations
conveys the information about the sounds being played/recorded/etc.


And if, for example, this signal is digitized, then instead of a linear
series of pluses and minuses you're saying that there is, in effect,
more like a continuous stream of shotgun-style pepper blasts of
multiple simultaneous frequencies. Hmmmm. I have a bit of a hard time
grasping this because it would imply that, once the signal got to a
speaker cone, the speaker cone would need to move in and out
simultaneously, which doesn't seem possible. Could you elucidate further
where my thinking is flawed?


The flaw is in your assumptions about both the analog and digital
representations of the signal patterns. This then leads to the "hard time"
you encounter in trying to see how your assumed process would work.

See the above for a simple description of analog waveforms. Digital ones
would need (normally) to be converted back into analog form to supply the
waveforms required by the speaker. Thus the microphone and speaker do not
produce/use 'pulses' but continually varying levels in the appropriate
patterns.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Kalman Rubinson November 26th 04 10:59 PM

The Outer Shell
 
Thanks for making a clearer statement than I did. The issue is,
however, that there is more information to be gleaned from the signal
than the OP is seeing when he looks at the 'scope.

Kal

On Fri, 26 Nov 2004 10:17:31 +0000 (GMT), Jim Lesurf
wrote:

In article , Kalman
Rubinson
wrote:
One issue is that the oscilloscope is not showing you all the information
in the signal that allows the discrimination of individual instruments
and other tonal/spatial details. The scope only shows the envelope of
the total energy at a particular instant and not the individual elements
which contribute to that envelope. As a simple example, compare the
single instantaneous value on the scope with the detailed information
seen on a frequency analyzer at that same instant. The ear is pretty
good at a similar discrimination and extracts more information than a
simple oscilloscope.


Well, the oscilloscope display does not really 'extract information' beyond
showing you a level-time waveform pattern. Up to the viewer to make sense
of it. :-)

It is potentially misleading to compare a scope reading as a "single
instantaneous value" - i.e. at just one time instant - with a complete
spectrum. The same information is in the level-time waveform pattern for a
given time interval as is in the spectrum of that chunk of time. Thus a
finite extended time interval is required in both cases to compare "like
with like".

Slainte,

Jim



Kalman Rubinson November 26th 04 11:02 PM

The Outer Shell
 
On Fri, 26 Nov 2004 04:42:25 GMT, "Spiderant"
wrote:

Because I don't have a frequency analyzer kicking around, I tried to come up
with some images to see what you are referring to (see this link:
http://www.softpicks.net/software/Fr...yzer-6079.htm).

I appreciate your explanation.

It certainly appears on the above screenshot that there is more happening at
a given moment than a momentary energy pulse. And if the screenshot is
correct, then what I always assumed was only a linear stream of pulses
coming from the microphone is in effect a multitude of simultaneous pulses.
And if, for example, this signal is digitized, then instead of a linear
series of pluses and minuses you're saying that there is, in effect, more
like a continuous stream of shotgun-style pepper blasts of multiple
simultaneous frequencies. Hmmmm. I have a bit of a hard time grasping this
because it would imply that, once the signal got to a speaker cone, the
speaker cone would need to move in and out simultaneously, which doesn't seem
possible. Could you elucidate further where my thinking is flawed?


I think your impression of digitization and the movement of the
speaker cone is simplistic. Before applying philosophical rigor to a
process, it might be a good idea to become technically informed about
that process. There are some textbooks. Perhaps others will chime in
on this.

Kal




Much appreciated,

Roland Goetz.


"Kalman Rubinson" wrote in message
.. .
One issue is that the oscilloscope is not showing you all the
information in the signal that allows the discrimination of individual
instruments and other tonal/spatial details. The scope only shows the
envelope of the total energy at a particular instant and not the
individual elements which contribute to that envelope. As a simple
example, compare the single instantaneous value on the scope with the
detailed information seen on a frequency analyzer at that same
instant. The ear is pretty good at a similar discrimination and
extracts more information than a simple oscilloscope.

Kal

On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:


"Ian Bell" wrote in message
...
Spiderant wrote:

I think you will find most of this group will tell you that your
philosophy professor is completely wrong.

Ian
--
Ian Bell

I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone would tell me a proper explanation as
to why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before
an orchestra, and the microphone is connected to an oscilloscope, from what
I know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded,
and even if I had as many speakers as instruments in an orchestra, I can
never again break the signal up to reproduce the original instruments. The
recording is forever going to be only a shadow of the orchestra. Again,
this seems quite logical to me.

Now, as I believe Chris Morriss suggested in another posting, the
diaphragm of an ear is not unlike the diaphragm of a microphone.
Consequently, when listening to a live concert, I too would only hear the
combined signal coming from the orchestra. However, as I mentioned to Mr.
Morriss, when we go to a concert, it is not a static event. We're
constantly turning our heads and thereby altering the signal coming to our
eardrums. Therefore, even if we can only experience the combined signal
while attending a live recording, this shadow is constantly shifting and
changing along with the shifts of our heads and it becomes possible to
discern the individual instruments that a static recording can never
reveal.

Again, please correct me if this analagy is incorrect.

Roland Goetz.








Spiderant November 27th 04 07:29 PM

The Outer Shell
 

"Kalman Rubinson" wrote in message
...
On Fri, 26 Nov 2004 04:42:25 GMT, "Spiderant"
wrote:

I think your impression of digitization and the movement of the
speaker cone is simplistic. Before applying philosophical rigor to a
process, it might be a good idea to become technically informed about
that process. There are some textbooks. Perhaps others will chime in
on this.

Kal

I agree that my impression is simplistic. I'm probably extremely naive in
thinking of the information being sent from a microphone to a recording
device at any point of time as being nothing more than a polarity
difference between two wires. This is where I think I've made my error. I
don't want to waste anyone's time here and I do appreciate the input. I'll
be hitting the library later on this afternoon to see if I can find some
basic technical information and, if I'm still confused, I may post the
question again at a later date.

Thanks for your input, Kal.

Roland.



Spiderant November 27th 04 07:35 PM

The Outer Shell
 

"Nick Gorham" wrote in message
...
Kalman Rubinson wrote:
I would disagree slightly there; the oscilloscope is not extracting or
processing any information, it's just showing a voltage against time
display. All the information in the signal is being displayed by the
scope; what's different is that our eyes are not able to extract and
process the information.

If you only have a single instantaneous value on the scope you have no
frequency information, and if you send that single value to a frequency
analyser it will show nothing; it's just the difference between viewing the
signal in the time and frequency domains.

You could equally say that the frequency display also doesn't display all
the information, as it provides no information about the phase
relationships between the various frequencies that it displays, whereas
this information is displayed by the scope.
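Nick's phase point is easy to demonstrate: two clearly different waveforms can share an identical magnitude spectrum. (Illustrative Python with a naive DFT; the 64-sample frame and bin-5 tone are arbitrary choices.)

```python
import cmath, math

# A sketch of the phase point: a sine and a cosine at the same frequency
# are different waveforms (a scope shows that), yet their magnitude
# spectra are identical -- a magnitude-only display has lost the phase.

def magnitude_spectrum(x):
    n_pts = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                    for n in range(n_pts))) for k in range(n_pts)]

N = 64
sine = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
cosine = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]  # same tone, shifted 90 degrees

# The time-domain traces differ sample by sample...
assert any(abs(a - b) > 0.5 for a, b in zip(sine, cosine))
# ...but the magnitude spectra are indistinguishable.
assert all(abs(a - b) < 1e-6
           for a, b in zip(magnitude_spectrum(sine), magnitude_spectrum(cosine)))
```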

--
Nick


Hello Nick,

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge), but could you tell me
exactly what or how much information is going from a microphone to a
recording device? I always just assumed that, at any given point in time,
there is nothing more than a polarity difference between two wires. If
possible, could you tell me what is happening at each given point?

Much appreciated,

Roland Goetz.



Nick Gorham November 27th 04 08:40 PM

The Outer Shell
 
Spiderant wrote:



Hello Nick,

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge), but could you tell me
exactly what or how much information is going from a microphone to a
recording device? I always just assumed that, at any given point in time,
there is nothing more than a polarity difference between two wires. If
possible, could you tell me what is happening at each given point?

Much appreciated,

Roland Goetz.



Hi,

I think Jim can do this much better, but here's the little I know. To
actually state how much information is being sent, you need to know a
couple of things: the range of frequencies being transmitted, and the
signal to noise ratio. Given those, you can actually calculate the amount
of information; look up Shannon in the text books. However, I don't think
you are using the word information in such a formal sense.

The simple and quick answer is yes, at a particular point in time there
is only a single voltage being produced by the source, but that's just
one part of the story: at a point in time just before that, the voltage
was at a different level, and at a point in the future it will be at yet
another voltage. So you could regard the signal as a sequence of
instantaneous voltage levels, and the information is encoded in this
ever-changing level.

To try and put it into context with your original question, consider two
instruments; let's use a pair of flutes, as they can produce nice pure
tones. If one flute is playing an A above middle C (just guessing, I don't
know the actual range of a flute), that's a 440Hz sine wave (for the sake
of argument), which means the wave goes up, and then down and back again,
440 times a second. So if that was recorded and played back, the
speaker cone would follow the sine wave and move in and out 440 times a
second. Now if we play that recording, and look at it on a scope, we see
a continuous sine wave on the screen. That's showing the signal in the
time domain: it displays how the voltage changes with respect to time.

If we feed the same signal into a spectrum analyser, we see a very
different display: a single line at the frequency of 440Hz, showing that
the recording contains only a single frequency. That's showing the signal
in the frequency domain: how the signal is composed of sine waves of
various frequencies.

Now let's take a second flute, and this time play an A one octave above
the other; this is a note that has a frequency of 880Hz (each musical
octave is a doubling in frequency). Now if we record both flutes playing
their notes together, and play them back, the speaker does not have to
move in and out 440 times a second and 880 times a second at the same
time; it moves to follow the signal that is the combination of the two
frequencies. (This would be so much simpler to show in person, with a
copy of Cool Edit.) Then if we look at this on a scope, the display shows
a trace that is the combination of the two frequencies: imagine a sine
wave that, while it wobbles up and down, is also wobbling at twice the
speed. And if we feed the same signal into the spectrum analyser, now we
see two lines, one at 440 and one at 880, showing the signal is a
combination of the two separate frequencies.
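For anyone without a copy of Cool Edit to hand, the two-flute example can be reproduced numerically. (Illustrative Python with a naive DFT; the 44100 samples/s rate and 0.1 s chunk are assumptions chosen so that 440 Hz and 880 Hz fall exactly on analysis bins.)

```python
import cmath, math

# One combined waveform carrying 440 Hz and 880 Hz together shows up as
# two distinct lines on the spectrum, even though the 'speaker' trace is
# a single curve.
RATE = 44100
N = 4410  # a 0.1 s chunk: 440 Hz and 880 Hz fit exactly 44 and 88 cycles

flute_low = [math.sin(2 * math.pi * 440 * n / RATE) for n in range(N)]
flute_high = [math.sin(2 * math.pi * 880 * n / RATE) for n in range(N)]
both = [a + b for a, b in zip(flute_low, flute_high)]  # the combined waveform

def line_height(x, k):
    """Magnitude of DFT bin k -- one 'line' on the spectrum analyser display."""
    n_pts = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                   for n in range(n_pts)))

# Bin k corresponds to k * RATE / N Hz, so 440 Hz is bin 44 and 880 Hz is bin 88.
assert line_height(both, 44) > 1000.0  # tall line at 440 Hz
assert line_height(both, 88) > 1000.0  # tall line at 880 Hz
assert line_height(both, 60) < 1.0     # nothing at frequencies neither flute played
```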

Not sure if any of the above makes sense or helps, but there you go.

In a way (and not to offend, we all had to learn this once), some of
this stuff is so basic that it's hard to explain; you are just used to
it, and take it as read. Mind you, it's always good to go back to basics:
you often end up with a better understanding of something you thought
you fully understood already :-)

--
Nick

Glenn Booth November 27th 04 10:03 PM

The Outer Shell
 
Hi,

In message 8y5qd.355388$Pl.15905@pd7tw1no, Spiderant
writes

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge),


Start with "The acoustical foundations of music" by Bachus, ISBN:
0393090965. It does a very good job of explaining how musical
instruments create sound, and it also explores how various transducers
(such as ears and microphones) react to the pressure variations in air
caused by sounds.

but could you tell me
exactly what or how much information is going from a microphone to a
recording device? I always just assumed that, at any given point in time,
there is nothing more than a polarity difference between two wires. If
possible, could you tell me what is happening at each given point?


It's not electrical polarity as such that we're interested in. You need
to take a step back, to pressure changes in air caused by instruments
making sound (small ones, sometimes happening quite fast).
Oversimplifying, sounds are caused by air molecules moving. They have
collisions, which make them move back and fore, causing local regions of
high pressure and low pressure, which propagate outwards as a pressure
wave at the 'speed of sound'[1], like ripples on a lake. When the
pressure wave reaches a microphone, the pressure variations cause
changes in voltage at the output of the microphone. Assuming a perfect
pure-pressure transducer (say, a really good omnidirectional microphone)
you get out of the microphone a varying voltage over time which gives a
representation of how the pressure wave of the sound varied over time. A
transducer changes one form of energy to another - in this case,
acoustic energy (sound) is changed to electrical energy.

So now we have a continuously changing electrical signal on a pair[2] of
wires. Now amplify it, if necessary, so that it can be fed to your
recording device. You can now (e.g.) sample and quantise the voltage
levels frequently enough and with enough precision to capture all the
needed information (e.g. to store it digitally) or you could use the
signal to change the magnetic properties of a strip of magnetic tape (to
use two examples). What you have 'recorded' is a record of the changes
in voltage over time that occurred due to changes in air pressure at a
specific point in space (where the microphone was placed).
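The 'sample and quantise' step can be sketched as follows. (Illustrative Python; the 8000 samples/s rate, 16-bit depth, and 1 kHz test tone are made-up figures, not anything a particular recorder uses.)

```python
import math

# A rough sketch of 'sample and quantise': the continuously varying
# microphone voltage is measured at regular instants, and each measurement
# is rounded to the nearest of a finite set of levels for storage.
RATE = 8000
LEVELS = 2 ** 15  # 16-bit style: +/-32768 steps for a voltage in [-1, 1]

def sample_and_quantise(voltage_at, n_samples):
    return [round(voltage_at(n / RATE) * (LEVELS - 1)) for n in range(n_samples)]

# Pretend the microphone output is a pure 1 kHz tone.
stored = sample_and_quantise(lambda t: math.sin(2 * math.pi * 1000.0 * t), 16)

# Each stored integer recovers the original voltage to within one step.
for n, s in enumerate(stored):
    original = math.sin(2 * math.pi * 1000.0 * n / RATE)
    assert abs(s / (LEVELS - 1) - original) < 1.0 / LEVELS
```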

If you decide you really want to know about all this stuff and you have
free time, look up TA225 (The technology of music) on the Open
University web site. It's a bargain, but it will be better next year
(when it's finished!).

[Side note to Jim Lesurf - if you're looking for some interesting work, get
in touch with the OU - the TA225 course started this year, and they
could use some help - far too many mistakes, some of which I am still
disputing with them, even after the exam!]

HTH.

[1] Whatever that is where you are.
[2] Sometimes. It could be more...
--
Regards,
Glenn Booth
Caveat: I've been drinking. Well, it's Saturday night.

Kalman Rubinson November 27th 04 10:32 PM

The Outer Shell
 
On Sat, 27 Nov 2004 23:03:18 +0000, Glenn Booth
wrote:

Hi,

In message 8y5qd.355388$Pl.15905@pd7tw1no, Spiderant
writes

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge),


Start with "The acoustical foundations of music" by Bachus, ISBN:
0393090965. It does a very good job of explaining how musical
instruments create sound, and it also explores how various transducers
(such as ears and microphones) react to the pressure variations in air
caused by sounds.


An excellent suggestion! I had forgotten about that book and your
note reminded me to retrieve it and put it at the top of my re-read
list.

FWIW, the spelling on my copy is 'Backus.' Too much wine tonight? ;-)

Kal

Spiderant November 28th 04 12:38 AM

The Outer Shell
 

"Stewart Pinkerton" wrote in message
...
On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:
You are forgetting one critical point in a modern recording - it's in
stereo. The very best, in accuracy terms, are made using minimalist
microphone techniques in real concert halls, and they can replicate
the ambience of the hall extremely well.

I have thought about this. I also understand (I think) how stereo
microphones and subsequently speakers would help create the illusion of
three dimensional sound (sort of like those Viewmaster 3D Viewers we all
played with as kids, but for ears). I used a single microphone as an
example just to keep what I wanted to express simple. Although it seems
logical that you would have up to double the information for a stereo setup
than from a mono source, I'm still not grasping why the original signal(s)
would contain more than the peripheral information of frequency extremes at
any given point in time.

But I'm also realizing at this point that the response I am looking for is
probably much too basic for this newsgroup. As posted elsewhere, I assumed
that a signal at a given point in time contains no more information than a
simple polarity difference between two wires. From what other posters are
telling me, I'm way off, which is why I hit the library today to do some
basic research.

Given a good stereo recording, as described above, the soundfield
reaching your head will closely mimic that which would reach your ears
in the original concert hall at the microphone position, and sure
enough, you can 'focus' on individual performers by slight movement of
your head in the same way. The only real drawback is that, in a
top-class system playing such a 'minimalist' top-class recording, the
'sweet spot' is very small, and moving your head more than a couple of
inches from the bisector of the speakers will destroy the sharpness of
the imaging.


I remember reading about how Eliahu Inbal was a strong proponent of dual
microphones. I have a CD of him conducting Mahler's 7th Symphony where he
is using only two microphones. I'm actually listening to this as I'm
writing. If you have any recommendations of good recordings using dual
mikes, I'm sure that more newsgroup readers than I would appreciate hearing
about them.

Unfortunately, because my in-laws live below us, I'm relegated to listening
to most of my music through headphones, which means that although the sweet
spot never varies, the in-the-head stereophonic image is not optimal.

Again, please correct me if this analagy is incorrect.


It is incorrect, it should be 'this analogy'............ :-)


Thanks for the correction. I'm not a frequent poster to newsgroups and I'm
used to Word spellchecking my documents before I send them. I'll try and
remember to use the spellchecker in Outlook before I post.

Besides, just *listen* to a good recording on a good system. One
careful observation is worth a thousand philosophical discussions!


Totally agree. The trick is to find the good system while on a tight
budget. Again, any recommendations for good recordings are always
appreciated, as are most of your postings in general.

Regards,

Roland Goetz.


Stewart Pinkerton | Music is Art - Audio is Engineering




Spiderant November 28th 04 12:50 AM

The Outer Shell
 

"Jim Lesurf" wrote in message
...
In article otxpd.338175$nl.283401@pd7tw3no, Spiderant
wrote:


The signals from the different instruments will be linearly superimposed.
i.e. the information about all the instruments reaching the microphone
location will all be present at that point.

Consequently, no matter what I do with that signal after it
is recorded, and even if I had as many speakers as instruments in an
orchestra, I can never again break the signal up to reproduce the original
instruments. The recording is forever going to be only a shadow of the
orchestra. Again, this seems quite logical to me.


Yes. The same would occur if your ear was at the microphone location. The
sound pressure at your ear would be the same linear superposition.

Hence the place where the sounds are 'broken up' again and identified is
in your ears/head in each case.

Now, as I believe Chris Morriss suggested in another posting, the
diaphragm of an ear is not unlike the diaphragm of a microphone.
Consequently, when listening to a live concert, I too would only hear
the combined signal coming from the orchestra. However, as I mentioned
to Mr. Morriss, when we go to a concert, it is not a static event.
We're constantly turning our heads and thereby altering the signal
coming to our eardrums. Therefore, even if we can only experience the
combined signal while attending a live recording, this shadow is
constantly shifting and changing along with the shifts of our heads and
it becomes possible to discern the individual instruments that a static
recording can never reveal.


Again, please correct me if this analagy is incorrect.


Yes. Please see my comments elsewhere. 'Stereo' is essentially a 'trick'
which exploits the properties of human perception. How well it works in
any case depends upon the recording, the replay system (including the
room), and the individual.

My experience is that it can sometimes work very well, but in other cases
not at all. :-)

Slainte,

Jim

--


I really appreciate your pointing me in the right direction in this and
previous posts. I've come to the realization that my understanding of basic
audio principles is very limited. I picked up some audio books from the
library, which I'll peruse before asking more questions. BTW, your previous
post about analog waveforms will be the focus of my research.

Out of curiosity, Jim, why do you sign your emails with the term "Slainte"?
I live on the West Coast of Canada and I've never heard the word. What does
it mean?

Thanks again for your lucid and informative responses.

Regards,

Roland Goetz.



Spiderant November 28th 04 01:05 AM

A big thanks to all the posters
 
The many excellent responses to my original question have inspired me to do
some research on audio properties. In the interim, I'm still enjoying my
music, although I'm going through a horrible dilemma now as to whether I
prefer vinyl or CDs (as I talked about in the Neil Young thread). On my way
to the library this afternoon to pick up some books on audio, I stopped at
our local Salvation Army store and did something I haven't done in a long,
long time. I started browsing through their used LPs. I ended up picking
up a pristine copy of Toscanini conducting Beethoven's first and ninth
symphonies, as well as a LP of Richter playing some Beethoven piano sonatas.
At one twentieth the price of a CD for each LP, I figured I would try it.
Of course my wife crossed her arms and gave me a dirty (not in the nice way)
look when I came back home. Half a year ago I gave away a significant
portion of my LP collection to clear up some space for her plants and she's
not about to let me do some selective pruning. So, I guess that means I'll
be listening to music tonight. Oh well, things could be worse.

Thanks again to all the respondents in my favorite audio newsgroup.

Keep it lit,

Roland Goetz.


"Spiderant" wrote in message
news:MQcpd.321783$nl.260854@pd7tw3no...
I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment,
you will only hear these peaks and valleys, and none of the filler
in-between. I know that I'm not explaining this using proper audio
terminology, but his explanation seems logical to me. If, for example, a
clarinet and a flute are playing at the same time, all we will ever hear
from the recording is the "combined" signal.

The result of this is that, no matter how good the recording is, we can
never truly hear the individual instruments which, of course, negates
things like "air" around the instruments (unless, of course, there is a
space between the actual notes). In fact, we can never hear the entire
orchestra, nor differentiate between the instruments playing. All we hear
is the shadow of the music.

If this idea is way off, please correct me. I have very little technical
knowledge, but I do love music. Any help would be greatly appreciated.

Roland Goetz.







Glenn Booth November 28th 04 07:05 AM

The Outer Shell
 
Hi,

In message , Kalman Rubinson
writes
On Sat, 27 Nov 2004 23:03:18 +0000, Glenn Booth
wrote:

Hi,

In message 8y5qd.355388$Pl.15905@pd7tw1no, Spiderant
writes

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge),


Start with "The acoustical foundations of music" by Bachus, ISBN:
0393090965. It does a very good job of explaining how musical
instruments create sound, and it also explores how various transducers
(such as ears and microphones) react to the pressure variations in air
caused by sounds.


An excellent suggestion! I had forgotten about that book and your
note reminded me to retrieve it and put it at the top of my re-read
list.

FWIW, the spelling on my copy is 'Backus.' Too much wine tonight? ;-)


Heh ... an appropriate typo, I think :-) Thanks for the correction, and
yes, too much wine. A rather nice Rioja, and I explored the theory that
the liquid at the bottom of the bottle tastes better than that at the
top. I don't remember the results of my experiment, so I may have to
repeat it at some point.

--
Regards,
Glenn Booth

mick November 28th 04 07:44 AM

The Outer Shell
 
On Sun, 28 Nov 2004 01:38:59 +0000, Spiderant wrote:

snip

Unfortunately, because my in-laws live below us, I'm relegated to
listening to most of my music through headphones, which means that
although the sweet spot never varies, the in-the-head stereophonic
image is not optimal.

snip

Have a look for "binaural" stuff. This is recorded using a dummy head and
can be almost frighteningly convincing when listened to via headphones.
http://www.binaural.com/binfaq.html

For some more interesting headphone stuff look here:
http://www.headwize.com/projects/
The "Signal Processors" heading has several designs for "enhancing"
headphone listening on normal stereo recordings. These remove the "hole in
the middle" effect that you sometimes get.

--
Mick
(no M$ software on here... :-) )
Web: http://www.nascom.info
Web: http://projectedsound.tk



Jim Lesurf November 28th 04 08:35 AM

The Outer Shell
 
In article , Kalman
Rubinson
wrote:
Thanks for making a clearer statement than I did. The issue is,
however, that there is more information to be gleaned from the signal
than the OP is seeing when he looks at the 'scope.


Agreed. :-) The problem here is encapsulated in your "seeing" as it seems
that the OP is not yet able to recognise the significance of what he would
see.

Slainte,

Jim

--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

Eiron November 28th 04 08:46 AM

The Outer Shell
 
Spiderant wrote:

"Stewart Pinkerton" wrote in message
...

On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:
You are forgetting one critical point in a modern recording - it's in
stereo. The very best, in accuracy terms, are made using minimalist
microphone techniques in real concert halls, and they can replicate
the ambience of the hall extremely well.


I have thought about this. I also understand (I think) how stereo
microphones and subsequently speakers would help create the illusion of
three dimensional sound (sort of like those Viewmaster 3D Viewers we all
played with as kids, but for ears).


The sonic equivalent of the Viewmaster would be Binaural Stereo
or Dummy Head recording, used many years ago by the BBC for some
radio plays. There are a few records made with this technique
though the only ones I have are by Edgar Froese from the seventies.

I'm still not grasping why the original signal(s)
would contain more than the peripheral information of frequency extremes at
any given point in time.



The only "information" that your eardrum passes is its instantaneous
displacement just as a microphone does.


Given a good stereo recording, as described above, the soundfield
reaching your head will closely mimic that which would reach your ears
in the original concert hall at the microphone position, and sure
enough, you can 'focus' on individual performers by slight movement of
your head in the same way.


That's the problem with binaural stereo. It sounds as though your head
is stationary. It would be amusing to fit motion sensors to the phones
and actuators to the dummy head so that the microphones move as your
head does. This of course would only work for a single live session.

Unfortunately, because my in-laws live below us, I'm relegated to listening
to most of my music through headphones, which means that although the sweet
spot never varies, the in-the-head stereophonic image is not optimal.


Some people deliberately add crosstalk/delay when listening to a normal
recording on phones to improve the image. I've not tried it myself.
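The idea can be sketched in a few lines. This is a minimal illustration of a delay-and-attenuate crossfeed, not any particular published design; the 0.3 ms delay and 0.3 gain are assumed values chosen purely for demonstration:

```python
import numpy as np

def crossfeed(left, right, sample_rate=44100, delay_ms=0.3, gain=0.3):
    """Feed an attenuated, slightly delayed copy of each channel into
    the other, crudely mimicking the acoustic crosstalk that loudspeaker
    listening provides but headphones lack."""
    delay = int(sample_rate * delay_ms / 1000)
    # Delay the opposite channel by padding its front with zeros.
    l_delayed = np.concatenate([np.zeros(delay), left])[:len(left)]
    r_delayed = np.concatenate([np.zeros(delay), right])[:len(right)]
    out_left = left + gain * r_delayed
    out_right = right + gain * l_delayed
    # Normalise so the summed channels cannot clip.
    peak = max(np.abs(out_left).max(), np.abs(out_right).max(), 1e-12)
    return out_left / peak, out_right / peak
```

Real crossfeed circuits, such as the ones described on HeadWize, also low-pass filter the fed-across signal, since the head shadows high frequencies more than low ones.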


--
Eiron.

Jim Lesurf November 28th 04 08:48 AM

The Outer Shell
 
In article , Nick Gorham
wrote:
Spiderant wrote:

[snip]


Hi,


I think Jim can do this much better, but here's the little I know. To
state how much information is being sent, you need to know a couple of
things: the range of frequencies being transmitted, and the signal-to-noise
ratio. Given those, you can actually calculate the amount of
information - look up Shannon in the textbooks. However, I don't think you
are using the word 'information' in such a formal sense.


FWIW the above looks fine to me. :-)

The simple and quick answer is yes: at a particular point in time there
is only a single voltage being produced by the source. But that's just
one part of the story: at a point in time just before that, the voltage
was at a different level, and at a point in the future it will be at yet
another voltage. So you could regard the signal as a sequence of
instantaneous voltage levels, and the information is encoded in this
ever-changing level.
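Nick's point can be demonstrated numerically. In this sketch two pure tones (arbitrary stand-ins for the 'clarinet' and 'flute' of the original question) are summed into a single sequence of instantaneous values, and a Fourier transform recovers both from that one 'combined' signal:

```python
import numpy as np

fs = 8000                  # sample rate in Hz (assumed, for illustration)
t = np.arange(fs) / fs     # one second of sample instants
# Two tones summed into ONE level-per-instant sequence:
combined = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The single ever-changing level still carries both tones;
# a Fourier transform pulls them back apart.
spectrum = np.abs(np.fft.rfft(combined)) / (fs / 2)
peaks = np.argsort(spectrum)[-2:]            # the two strongest bins
print(sorted(int(p) for p in peaks))         # → [440, 880]
```

Nothing was lost by summing: both frequencies, and their relative levels, are still fully present in the one waveform.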


May be useful to add the following:

Think of the outer parts of the ears as being pressure detectors. These
pick up the way in which the sound pressure varies with time, and then
convey this pressure-time pattern (or 'waveform') into the inner ear.

The inner ear then examines and analyses the vibration waveform and can
simultaneously recognise many different details.

This isn't simply a matter of whether the pressure level is 'positive' or
'negative' at any one time. The precise shape of the waveform matters, and
tiny details or changes in the shape of the pressure-time patterns can
produce audible effects.

The microphones pick up the pressure-time patterns, and produce
voltage-time patterns which should have the same 'shape' and convey the
same 'details'. The amount of information carried depends upon how tiny a
detail may be conveyed and by how brief (in time) a detail can be conveyed.

The ability to convey tiny details is limited by noise. The ability to
convey brief details is limited by the range of frequencies the microphone,
etc, can respond to.

Hence all the details of the shapes matter, and the amount of info is
limited by the noise level (compared with the signal level we wish to
convey) and the bandwidth (range of frequencies) conveyed.
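As a rough worked example of that bandwidth/noise limit (the Shannon-Hartley relation; the 20 kHz bandwidth and 90 dB signal-to-noise figures below are assumed, purely illustrative):

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical analogue channel: 20 kHz bandwidth, 90 dB SNR.
print(round(channel_capacity(20_000, 90)))   # ≈ 600,000 bits per second
```

Note how either narrowing the bandwidth or worsening the signal-to-noise ratio reduces the amount of information the channel can convey - exactly the pair of limits described above.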

Of course, it is much more complicated than the above - that's why we still
all end up arguing about it. :-)

Slainte,

Jim


Jim Lesurf November 28th 04 08:51 AM

The Outer Shell
 
In article , Glenn Booth
wrote:

[snip]


If you decide you really want to know about all this stuff and you have
free time, look up TA225 (The technology of music) on the open
University web site. It's a bargain, but it will be better next year
(when it's finished!).


[Side note to Jim Lesurf - if you're looking for some interesting work, get
in touch with the OU - the TA225 course started this year, and they
could use some help - far too many mistakes, some of which I am still
disputing with them, even after the exam!]


You've made me interested in the site, so I may well investigate at some
point.

Alas, I can't really do very much 'academic' work these days, for the same
reason I had to take early retirement. :-/ Hence 'large' projects are
likely to be beyond me these days!

Slainte,

Jim


Stewart Pinkerton November 28th 04 08:58 AM

The Outer Shell
 
On Sun, 28 Nov 2004 01:50:23 GMT, "Spiderant"
wrote:

I really appreciate your pointing me in the right direction in this and
previous posts. I've come to the realization that my understanding of basic
audio principles is very limited. I picked up some audio books from the
library, which I'll peruse before asking more questions. BTW Your previous
post about analog waveforms will be the focus of my research.


Don't worry about it. Your willingness to learn places you very high
in the rankings of 'serious audiophiles'. It's always good to keep an
open mind, but be careful that your brain does not fall out in the
process! :-)

Out of curiosity Jim, why do you sign your emails with the term "Slainte"?
I live on the West Coast of Canada and I've never heard the word. What does
it mean?


Try some Scots Canadians! It's a Gaelic word meaning 'health'; the
full expression is Slainte Mhor, pronounced 'Slaandjivaa'. It literally
means 'big health', but is taken as the ubiquitous 'cheers', and is
the appropriate toast for whisky drinkers.

As an aside, in Jacobite households during the early 18th century, the
'loyal' toast would often be said while passing the charged glass over
the top of the water jug, the toast being 'good health over the
water', a reference to the Pretenders to the Throne of Scotland, the
Stuarts, who were in France at the time.
--

Stewart Pinkerton | Music is Art - Audio is Engineering

Jim Lesurf November 28th 04 09:00 AM

The Outer Shell
 
In article T_9qd.366396$nl.259331@pd7tw3no, Spiderant
wrote:
[snip]
I remember reading about how Eliahu Inbal was a strong proponent of dual
microphones. I have a CD of him conducting Mahler's 7th Symphony where
he is using only two microphones. I'm actually listening to this as
I'm writing. If you have any recommendations of good recordings using
dual mikes, I'm sure that more newsgroup readers than I would
appreciate hearing about them.


Some people do advocate various 'pure and simple' microphone techniques
like the above. There are two snags, though.

One is that such methods can be quite demanding on the skill of the
engineer, and the conductor - as well as on the acoustics of the recording
location. Thus it may give lovely results in some cases, but sound hopeless
in others.

FWIW The Bartok "Concerto for Orchestra" recording on Mercury I mentioned
in a recent posting is a Robert Fine/Wilma Cozart recording using just 3
microphones. Some of their 'Mercury' recordings (and lesser known ones that
used to be on the 'Pye' label) do employ 'simple' methods to get quite good
results. My copy of the Bartok is a CD-A Mercury 432 017-2, but I think it
may have been re-issued since then.

The other snag I mention below...

Unfortunately, because my in-laws live below us, I'm relegated to
listening to most of my music through headphones, which means that
although the sweet spot never varies, the in-the-head stereophonic
image is not optimal.


This is the second snag. Most recordings tend to be produced assuming you
are listening via loudspeakers. Hence you may find that some recordings
sound excellent via speakers, but less satisfactory via headphones. :-/

[snip]

Totally agree. The trick is to find the good system while on a tight
budget.


Indeed. :-)

However if you are using headphones you can side-step one of the main
sources of bother/expense by not having to worry so much about the
loudspeakers and room acoustics.

Slainte,

Jim


Jim Lesurf November 28th 04 09:03 AM

The Outer Shell
 
In article z9aqd.366477$nl.121146@pd7tw3no, Spiderant
wrote:
[snip]


Out of curiosity Jim, why do you sign your emails with the term
"Slainte"? I live on the West Coast of Canada and I've never heard the
word. What does it mean?


It is Gaelic. It is part of a 'toast' which (approximately) says "Good
Health! Great Health!"

The English equivalent is "Cheers!" but since emigrating to Scotland I
decided that Slainte is a better and more appropriate word. :-)

Slainte,

Jim


Mike Gilmour November 28th 04 11:45 AM

A big thanks to all the posters
 

"Spiderant" wrote in message
news:8oaqd.357363$Pl.271729@pd7tw1no...
The many excellent responses to my original question have inspired me to
do some research on audio properties. In the interim, I'm still enjoying
my music, although I'm going through a horrible dilemma now as to whether
I prefer vinyl or CDs (as I talked about in the Neil Young thread). On my
way to the library this afternoon to pick up some books on audio, I
stopped at our local Salvation Army store and did something I haven't done
in a long, long time. I started browsing through their used LPs. I ended
up picking up a pristine copy of Toscanini conducting Beethoven's first
and ninth symphonies, as well as a LP of Richter playing some Beethoven
piano sonatas. At one twentieth the price of a CD for each LP, I figured I
would try it. Of course my wife crossed her arms and gave me a dirty (not
in the nice way) look when I came back home. Half a year ago I gave away
a significant portion of my LP collection to clear up some space for her
plants and she's not about to let me do some selective pruning. So, I
guess that means I'll be listening to music tonight. Oh well, things
could be worse.

Thanks again to all the respondents in my favorite audio newsgroup.

Keep it lit,

Roland Goetz.



Try to work it out so that when your wife is picking up the groceries you are
delving through the charity shops. As you've already found, there are some
good vinyl recordings still to be had at reasonable prices, though some
charity shops are getting wise to this - one shop referred to the 'Penguin
price guide for Record & CD Collectors' before charging!! I bear in mind
it's for charity and it's nearly Christmas...up to a point ;-)

Mike



Fleetie November 28th 04 01:55 PM

The Outer Shell
 
"Jim Lesurf" wrote
This isn't simply a matter of whether the pressure level is 'positive' or
'negative' at any one time. The precise shape of the waveform matters, and
tiny details or changes in the shape of the pressure-time patterns can
produce audible effects.


Well yeah but any waveform is just a sum of a load of sinusoidal waves
anyway, by Fourier.

Depends how you look at it.

Anyway, this whole thing is a bit more complex than pressure-versus-time,
because sound is NOT perceived by inputting an electrical
representation of the pressure signal into some wetware "black box" which
does processing on the signal to work out what the sound is.

Rather, in the cochlea, there's a tube, with a bit running along the middle
of it, and a load of tiny hairs, and IIRC, different points along that
structure detect different frequencies, and each hair (or maybe proximate
small group of hairs) sends a signal down a nerve to a part of the brain.
So it's far from simple to imagine what kind of processing may be going on,
with all those many, many inputs to the brain.

A computer would typically recognise sound (e.g. speech recognition) by
analysing ONE input signal. This is much simpler than what's going on in
our ears/brains, though ISTM that it's possible that our system loses some
phase information.
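That last point can be illustrated with a small sketch: two waveforms built from the same pair of tones, differing only in the phase of one component, have visibly different shapes but identical magnitude spectra, so any magnitude-only analysis discards that difference (the tone frequencies here are arbitrary):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
# Same frequency content, different phase relationship between harmonics:
a = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 150 * t)
b = np.sin(2 * np.pi * 50 * t) + np.cos(2 * np.pi * 150 * t)

mag_a = np.abs(np.fft.rfft(a))
mag_b = np.abs(np.fft.rfft(b))

# The waveform shapes differ noticeably...
print(np.max(np.abs(a - b)))        # clearly non-zero
# ...but a magnitude-only analysis cannot tell them apart:
print(np.allclose(mag_a, mag_b))    # → True
```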


Martin
--
M.A.Poyser Tel.: 07967 110890
Manchester, U.K. http://www.fleetie.demon.co.uk



