Audio, hi-fi and car audio forum. Audio Banter


uk.rec.audio (General Audio and Hi-Fi): discussion and exchange of hi-fi audio equipment.

The Outer Shell



 
 
#21 | November 26th 04, 11:02 PM, posted to uk.rec.audio | Kalman Rubinson (Posts: 214) | The Outer Shell

On Fri, 26 Nov 2004 04:42:25 GMT, "Spiderant"
wrote:

Because I don't have a frequency analyzer kicking around, I tried to come up
with some images to see what you are referring to (see this link:
http://www.softpicks.net/software/Fr...yzer-6079.htm).

I appreciate your explanation.

It certainly appears from the above screenshot that there is more happening
at a given moment than a momentary energy pulse. And if the screenshot is
correct, then what I always assumed was only a linear stream of pulses
coming from the microphone is in effect a multitude of simultaneous pulses.
And if, for example, this signal is digitized, then instead of a linear
series of pluses and minuses you're saying that there is, in effect, more
like a continuous stream of shotgun-style pepper blasts of multiple
simultaneous frequencies. Hmmm. I have a bit of a hard time grasping this
because it would imply that, once the signal got to a speaker cone, the
speaker cone would need to move in and out simultaneously, which doesn't
seem possible. Could you elucidate further where my thinking is flawed?


I think your impression of digitization and the movement of the
speaker cone is simplistic. Before applying philosophical rigor to a
process, it might be a good idea to become technically informed about
that process. There are some textbooks. Perhaps others will chime in
on this.

Kal




Much appreciated,

Roland Goetz.


"Kalman Rubinson" wrote in message
.. .
One issue is that the oscilloscope is not showing you all the
information in the signal that allows the discrimination of individual
instruments and other tonal/spatial details. The scope only shows the
envelope of the total energy at a particular instant and not the
individual elements which contribute to that envelope. As a simple
example, compare the single instantaneous value on the scope with the
detailed information seen on a frequency analyzer at that same
instant. The ear is pretty good at a similar discrimination and
extracts more information than a simple oscilloscope.

Kal

On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:


"Ian Bell" wrote in message
...
Spiderant wrote:

I think you will find most of this group will tell you that your
philosophy professor is completely wrong.

Ian
--
Ian Bell

I posted this question, which has intrigued me for quite a few years, in
this newsgroup because it seems that a lot of the posters here know what
they're talking about. If someone could give me a proper explanation as to
why my professor was wrong, I would really appreciate it.

But let me rephrase my question a bit. If a microphone is placed before an
orchestra, and the microphone is connected to an oscilloscope, from what I
know of oscilloscopes, the signal is not going to show every individual
instrument, but only the combined sounds coming from the orchestra.
Consequently, no matter what I do with that signal after it is recorded,
and even if I had as many speakers as instruments in an orchestra, I can
never again break the signal up to reproduce the original instruments. The
recording is forever going to be only a shadow of the orchestra. Again,
this seems quite logical to me.

Now, as I believe Chris Morriss suggested in another posting, the diaphragm
of an ear is not unlike the diaphragm of a microphone. Consequently, when
listening to a live concert, I too would only hear the combined signal
coming from the orchestra. However, as I mentioned to Mr. Morriss, when we
go to a concert, it is not a static event. We're constantly turning our
heads and thereby altering the signal coming to our eardrums. Therefore,
even if we can only experience the combined signal while attending a live
recording, this shadow is constantly shifting and changing along with the
shifts of our heads, and it becomes possible to discern the individual
instruments that a static recording can never reveal.

Again, please correct me if this analogy is incorrect.

Roland Goetz.







#22 | November 27th 04, 07:29 PM, posted to uk.rec.audio | Spiderant (Posts: 23) | The Outer Shell


"Kalman Rubinson" wrote in message
...
On Fri, 26 Nov 2004 04:42:25 GMT, "Spiderant"
wrote:

I think your impression of digitization and the movement of the
speaker cone is simplistic. Before applying philosophical rigor to a
process, it might be a good idea to become technically informed about
that process. There are some textbooks. Perhaps others will chime in
on this.

Kal

I agree that my impression is simplistic. I'm probably being extremely
naive in thinking that the information being sent from a microphone to a
recording device at any point in time is anything more than a polarity
difference between two wires. This is where I think I've made my error. I
don't want to waste anyone's time here and I do appreciate the input. I'll
be hitting the library later on this afternoon to see if I can find some
basic technical information and, if I'm still confused, I may post the
question again at a later date.

Thanks for your input, Kal.

Roland.


#23 | November 27th 04, 07:35 PM, posted to uk.rec.audio | Spiderant (Posts: 23) | The Outer Shell


"Nick Gorham" wrote in message
...
Kalman Rubinson wrote:
I would disagree slightly there. The oscilloscope is not extracting or
processing any information; it's just showing a voltage-against-time
display. All the information in the signal is being displayed by the
scope; what's different is that our eyes are not able to extract and
process the information.

If you only have a single instantaneous value on the scope you have no
frequency information, and if you send that single value to a frequency
analyser it will show nothing. It's just the difference between viewing the
signal in the time and frequency domains.

You could equally say that the frequency display also doesn't display all
the information, as it provides no information about the phase
relationships between the various frequencies that it displays, whereas
this information is displayed by the scope.
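That phase point can be illustrated numerically. In this hypothetical Python sketch (the sample rate and the phase shift are my own choices, nothing from the post), two signals are built from the same pair of frequencies, differing only in the phase of one component: a magnitude-only spectrum reading is identical for both, yet the waveforms a scope would draw are not.

```python
import math

RATE = 8000   # samples per second (assumed for illustration)
N = 8000      # one second of samples

def wave(phase):
    """440 Hz plus 880 Hz, with the 880 Hz component phase-shifted."""
    return [math.sin(2 * math.pi * 440 * i / RATE)
            + math.sin(2 * math.pi * 880 * i / RATE + phase)
            for i in range(N)]

a, b = wave(0.0), wave(math.pi / 2)

def magnitude(signal, freq):
    """Magnitude of one frequency component; the phase is discarded,
    just as on a simple spectrum-analyser display."""
    c = sum(x * math.cos(2 * math.pi * freq * i / RATE) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / RATE) for i, x in enumerate(signal))
    return round(math.hypot(c, s) / N, 3)

# Identical magnitude readings for both signals...
print([magnitude(a, f) for f in (440, 880)])  # -> [0.5, 0.5]
print([magnitude(b, f) for f in (440, 880)])  # -> [0.5, 0.5]
# ...but the time-domain traces differ: their peak values are not the same.
print(round(max(a), 2) == round(max(b), 2))   # -> False
```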

--
Nick


Hello Nick,

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge), but could you tell me
exactly what or how much information is going from a microphone to a
recording device? I always just assumed that, at any given point in time,
there is nothing more than a polarity difference between two wires. If
possible, could you tell me what is happening at each given point?

Much appreciated,

Roland Goetz.


#24 | November 27th 04, 08:40 PM, posted to uk.rec.audio | Nick Gorham (Posts: 851) | The Outer Shell

Spiderant wrote:



Hello Nick,

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge), but could you tell me
exactly what or how much information is going from a microphone to a
recording device? I always just assumed that, at any given point in time,
there is nothing more than a polarity difference between two wires. If
possible, could you tell me what is happening at each given point?

Much appreciated,

Roland Goetz.



Hi,

I think Jim can do this much better, but here's the little I know. To
actually state how much information is being sent, you need to know a
couple of things: the range of frequencies being transmitted, and the
signal-to-noise ratio. Given those, you can actually calculate the amount
of information; look up Shannon in the textbooks. However, I don't think
you are using the word information in such a formal sense.
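To make the Shannon pointer concrete: in that formal sense, a channel of bandwidth B (Hz) and signal-to-noise ratio S/N can carry at most C = B * log2(1 + S/N) bits per second. A minimal Python sketch; the 20 kHz bandwidth and 96 dB figures below are my own assumptions for illustration, not numbers from the post.

```python
import math

def capacity_bits_per_sec(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Rough audio-like numbers (assumed): 20 kHz bandwidth, 96 dB S/N
# (roughly what 16-bit digital audio achieves).
print(round(capacity_bits_per_sec(20_000, 96)))   # -> 637810
```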

The simple and quick answer is yes, at a particular point in time there is
only a single voltage being produced by the source, but that's just one
part of the story. At a point in time just before that, the voltage was at
a different level, and at a point in the future it will be at yet another
voltage. So you could regard the signal as a sequence of instantaneous
voltage levels, and the information is encoded in this ever-changing level.

To try and put it into context with your original question, consider two
instruments; let's use a pair of flutes, as they can produce nice pure
tones. If one flute is playing an A above middle C (just guessing, I don't
know the actual range of a flute), that's a 440 Hz sine wave (for the sake
of argument), which means the wave goes up, and then down and back again,
440 times a second. So if that was recorded and played back, the speaker
cone would follow the sine wave and move in and out 440 times a second.
Now if we play that recording and look at it on a scope, we see a
continuous sine wave on the screen. That's showing the signal in the time
domain: it displays how the voltage changes with respect to time.

If we feed the same signal into a spectrum analyser, we see a very
different display: a single line at the frequency of 440 Hz. This shows
the recording only contains a single frequency, that of 440 Hz. That's
showing the signal in the frequency domain: how the signal is composed of
frequencies of sine waves.

Now let's take a second flute, and this time play an A one octave above
the other; this is a note that has a frequency of 880 Hz (each musical
octave is a doubling in frequency). Now if we record both flutes playing
their notes together and play them back, the speaker does not have to
move in and out 440 times a second and 880 times a second at the same
time; it moves to follow the signal that's the combination of the two
frequencies. (This would be so much simpler to show in person, with a
copy of Cool Edit.) Then if we play this through a scope, the display
shows a trace that's the combination of the two frequencies: imagine a
sine wave that, while it wobbles up and down, is also wobbling at twice
the speed. And if we feed the same signal into the spectrum analyser, now
we see two lines, one at 440 and one at 880, showing the signal is a
combination of the two separate frequencies.
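Lacking a copy of Cool Edit, the two-flutes demonstration can be sketched in a few lines of Python. This is a hypothetical illustration: the 8 kHz rate and the crude probe-one-frequency-at-a-time "analyser" are my own assumptions, not anything from the post. The two tones add into a single waveform, yet probing that one waveform recovers energy at exactly 440 and 880 Hz.

```python
import math

RATE = 8000   # samples per second (assumed)
N = RATE      # one second of samples

# Two "flutes" as pure tones; the air (or the microphone wire) carries
# only their sum: one pressure/voltage value per instant.
mix = [math.sin(2 * math.pi * 440 * i / RATE)
       + math.sin(2 * math.pi * 880 * i / RATE) for i in range(N)]

def energy(freq):
    """Crude spectrum analyser: correlate the mix with a probe tone."""
    c = sum(mix[i] * math.cos(2 * math.pi * freq * i / RATE) for i in range(N))
    s = sum(mix[i] * math.sin(2 * math.pi * freq * i / RATE) for i in range(N))
    return math.hypot(c, s) / N   # normalised magnitude

for f in (330, 440, 660, 880):
    print(f, round(energy(f), 2))   # energy only at 440 and 880 Hz
```

The single list `mix` is the one voltage-per-instant signal the speaker cone follows; the `energy` probe plays the role of the spectrum analyser, pulling the two flutes back out of the combined waveform.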

Not sure if any of the above makes sense or helps, but there you go.

In a way (and not to offend, we all had to learn this once), some of this
stuff is so basic that it's hard to explain; you are just used to it and
take it as read. Mind you, it's always good to go back to basics; you
often end up with a better understanding of something you thought you
fully understood already :-)

--
Nick
#25 | November 27th 04, 10:03 PM, posted to uk.rec.audio | Glenn Booth (Posts: 160) | The Outer Shell

Hi,

In message 8y5qd.355388$Pl.15905@pd7tw1no, Spiderant
writes

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge),


Start with "The acoustical foundations of music" by Bachus, ISBN:
0393090965. It does a very good job of explaining how musical
instruments create sound, and it also explores how various transducers
(such as ears and microphones) react to the pressure variations in air
caused by sounds.

but could you tell me
exactly what or how much information is going from a microphone to a
recording device? I always just assumed that, at any given point in time,
there is nothing more than a polarity difference between two wires. If
possible, could you tell me what is happening at each given point?


It's not electrical polarity as such that we're interested in. You need
to take a step back, to pressure changes in air caused by instruments
making sound (small ones, sometimes happening quite fast).
Oversimplifying, sounds are caused by air molecules moving. They have
collisions, which make them move back and fore, causing local regions of
high pressure and low pressure, which propagate outwards as a pressure
wave at the 'speed of sound'[1], like ripples on a lake. When the
pressure wave reaches a microphone, the pressure variations cause
changes in voltage at the output of the microphone. Assuming a perfect
pure-pressure transducer (say, a really good omnidirectional microphone)
you get out of the microphone a varying voltage over time which gives a
representation of how the pressure wave of the sound varied over time. A
transducer changes one form of energy to another - in this case,
acoustic energy (sound) is changed to electrical energy.

So now we have a continuously changing electrical signal on a pair[2] of
wires. Now amplify it, if necessary, so that it can be fed to your
recording device. You can now (e.g.) sample and quantise the voltage
levels frequently enough and with enough precision to capture all the
needed information (e.g. to store it digitally) or you could use the
signal to change the magnetic properties of a strip of magnetic tape (to
use two examples). What you have 'recorded' is a record of the changes
in voltage over time that occurred due to changes in air pressure at a
specific point in space (where the microphone was placed).
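The "sample and quantise" step above can be sketched in a few lines of Python. This is a toy illustration: the 8 kHz rate, 16-bit depth, and 440 Hz test tone are my own assumptions, not figures from the post.

```python
import math

RATE = 8000                        # samples per second (assumed)
BITS = 16                          # resolution: 2**16 = 65536 levels
FULL_SCALE = 2 ** (BITS - 1) - 1   # 32767, the largest positive code

def record(signal, seconds):
    """Sample signal(t) (a voltage in -1..1) and quantise to integers,
    as a digital recorder does with the microphone voltage."""
    n = int(RATE * seconds)
    return [round(signal(i / RATE) * FULL_SCALE) for i in range(n)]

# "Record" 10 ms of a 440 Hz tone at half amplitude: the continuously
# varying voltage becomes a plain list of integer codes.
samples = record(lambda t: 0.5 * math.sin(2 * math.pi * 440 * t), 0.010)
print(len(samples), samples[0])    # -> 80 0
```

What ends up stored is exactly the record Glenn describes: the voltage at one point in space, captured often enough and precisely enough to reconstruct the pressure wave later.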

If you decide you really want to know about all this stuff and you have
free time, look up TA225 (The technology of music) on the Open
University web site. It's a bargain, but it will be better next year
(when it's finished!).

[Side note to Jim Lesurf: if you're looking for some interesting work, get
in touch with the OU. The TA225 course started this year, and they could
use some help: far too many mistakes, some of which I am still disputing
with them, even after the exam!]

HTH.

[1] Whatever that is where you are.
[2] Sometimes. It could be more...
--
Regards,
Glenn Booth
Caveat: I've been drinking. Well, it's Saturday night.
#26 | November 27th 04, 10:32 PM, posted to uk.rec.audio | Kalman Rubinson (Posts: 214) | The Outer Shell

On Sat, 27 Nov 2004 23:03:18 +0000, Glenn Booth
wrote:

Hi,

In message 8y5qd.355388$Pl.15905@pd7tw1no, Spiderant
writes

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge),


Start with "The acoustical foundations of music" by Bachus, ISBN:
0393090965. It does a very good job of explaining how musical
instruments create sound, and it also explores how various transducers
(such as ears and microphones) react to the pressure variations in air
caused by sounds..


An excellent suggestion! I had forgotten about that book and your
note reminded me to retrieve it and put it at the top of my re-read
list.

FWIW, the spelling on my copy is 'Backus.' Too much wine tonight? ;-)

Kal
#27 | November 28th 04, 12:38 AM, posted to uk.rec.audio | Spiderant (Posts: 23) | The Outer Shell


"Stewart Pinkerton" wrote in message
...
On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant"
wrote:
You are forgetting one critical point in a modern recording - it's in
stereo. The very best, in accuracy terms, are made using minimalist
microphone techniques in real concert halls, and they can replicate
the ambience of the hall extremely well.

I have thought about this. I also understand (I think) how stereo
microphones and subsequently speakers would help create the illusion of
three-dimensional sound (sort of like those Viewmaster 3D viewers we all
played with as kids, but for ears). I used a single microphone as an
example just to keep what I wanted to express simple. Although it seems
logical that you would have up to double the information from a stereo
setup compared with a mono source, I'm still not grasping why the original
signal(s) would contain more than the peripheral information of frequency
extremes at any given point in time.

But I'm also realizing at this point that the response I am looking for is
probably much too basic for this newsgroup. As posted elsewhere, I assumed
that a signal at a given point in time contains no more information than a
simple polarity difference between two wires. From what other posters are
telling me, I'm way off, which is why I hit the library today to do some
basic research.

Given a good stereo recording, as described above, the soundfield
reaching your head will closely mimic that which would reach your ears
in the original concert hall at the microphone position, and sure
enough, you can 'focus' on individual performers by slight movement of
your head in the same way. The only real drawback is that, in a
top-class system playing such a 'minimalist' top-class recording, the
'sweet spot' is very small, and moving your head more than a couple of
inches from the bisector of the speakers will destroy the sharpness of
the imaging.


I remember reading about how Eliahu Inbal was a strong proponent of dual
microphones. I have a CD of him conducting Mahler's 7th Symphony where he
is using only two microphones. I'm actually listening to this as I'm
writing. If you have any recommendations of good recordings using dual
mikes, I'm sure that more newsgroup readers than I would appreciate hearing
about them.

Unfortunately, because my in-laws live below us, I'm relegated to listening
to most of my music through headphones, which means that although the sweet
spot never varies, the in-the-head stereophonic image is not optimal.

Again, please correct me if this analagy is incorrect.


It is incorrect, it should be 'this analogy'............ :-)


Thanks for the correction. I'm not a frequent poster to newsgroups and I'm
used to Word spellchecking my documents before I send them. I'll try and
remember to use the spellchecker in Outlook before I post.

Besides, just *listen* to a good recording on a good system. One
careful observation is worth a thousand philosophical discussions!


Totally agree. The trick is to find the good system while on a tight
budget. Again, any recommendations for good recordings are always
appreciated, as are most of your postings in general.

Regards,

Roland Goetz.


Stewart Pinkerton | Music is Art - Audio is Engineering



#28 | November 28th 04, 12:50 AM, posted to uk.rec.audio | Spiderant (Posts: 23) | The Outer Shell


"Jim Lesurf" wrote in message
...
In article otxpd.338175$nl.283401@pd7tw3no, Spiderant
wrote:


The signals from the different instruments will be linearly superimposed.
i.e. the information about all the instruments reaching the microphone
location will all be present at that point.

Consequently, no matter what I do with that signal after it is recorded,
and even if I had as many speakers as instruments in an orchestra, I can
never again break the signal up to reproduce the original instruments. The
recording is forever going to be only a shadow of the orchestra. Again,
this seems quite logical to me.


Yes. The same would occur if your ear was at the microphone location. The
sound pressure at your ear would be the same linear superposition.

Hence the place where the sounds are 'broken up' again and identified is
in your ears/head in each case.

Now, as I believe Chris Morriss suggested in another posting, the
diaphragm of an ear is not unlike the diaphragm of a microphone.
Consequently, when listening to a live concert, I too would only hear
the combined signal coming from the orchestra. However, as I mentioned
to Mr. Morriss, when we go to a concert, it is not a static event.
We're constantly turning our heads and thereby altering the signal
coming to our eardrums. Therefore, even if we can only experience the
combined signal while attending a live recording, this shadow is
constantly shifting and changing along with the shifts of our heads and
it becomes possible to discern the individual instruments that a static
recording can never reveal.


Again, please correct me if this analogy is incorrect.


Yes. Please see my comments elsewhere. 'Stereo' is essentially a 'trick'
which exploits the properties of human perception. How well it works in
any case depends upon the recording, the replay system (including the
room), and the individual.

My experience is that it can sometimes work very well, but in other cases
not at all. :-)

Slainte,

Jim

--


I really appreciate your pointing me in the right direction in this and
previous posts. I've come to the realization that my understanding of basic
audio principles is very limited. I picked up some audio books from the
library, which I'll peruse before asking more questions. BTW Your previous
post about analog waveforms will be the focus of my research.

Out of curiosity, Jim, why do you sign your emails with the term "Slainte"?
I live on the West Coast of Canada and I've never heard the word. What does
it mean?

Thanks again for your lucid and informative responses.

Regards,

Roland Goetz.


#29 | November 28th 04, 01:05 AM, posted to uk.rec.audio | Spiderant (Posts: 23) | A big thanks to all the posters

The many excellent responses to my original question have inspired me to do
some research on audio properties. In the interim, I'm still enjoying my
music, although I'm going through a horrible dilemma now as to whether I
prefer vinyl or CDs (as I talked about in the Neil Young thread). On my way
to the library this afternoon to pick up some books on audio, I stopped at
our local Salvation Army store and did something I haven't done in a long,
long time. I started browsing through their used LPs. I ended up picking
up a pristine copy of Toscanini conducting Beethoven's first and ninth
symphonies, as well as an LP of Richter playing some Beethoven piano sonatas.
At one twentieth the price of a CD for each LP, I figured I would try it.
Of course my wife crossed her arms and gave me a dirty (not in the nice way)
look when I came back home. Half a year ago I gave away a significant
portion of my LP collection to clear up some space for her plants and she's
not about to let me do some selective pruning. So, I guess that means I'll
be listening to music tonight. Oh well, things could be worse.

Thanks again to all the respondents in my favorite audio newsgroup.

Keep it lit,

Roland Goetz.


"Spiderant" wrote in message
news:MQcpd.321783$nl.260854@pd7tw3no...
I once had a philosophy professor who casually mentioned to the class that
when we listen to a recorded piece of music, we don't hear the entire
spectrum of the music, but only the outer shell. He explained that when,
for example, a classical symphony is recorded, only the extreme peaks and
valleys of the signal are picked up and when the recording is played back,
because the speakers can only move in one direction at any given moment,
you will only hear these peaks and valleys, and none of the filler
in-between. I know that I'm not explaining this using proper audio
terminology, but his explanation seems logical to me. If, for example, a
clarinet and a flute are playing at the same time, all we will ever hear
from the recording is the "combined" signal.

The result of this is that, no matter how good the recording is, we can
never truly hear the individual instruments which, of course, negates
things like "air" around the instruments (unless, of course, there is a
space between the actual notes). In fact, we can never hear the entire
orchestra, nor differentiate between the instruments playing. All we hear
is the shadow of the music.

If this idea is way off, please correct me. I have very little technical
knowledge, but I do love music. Any help would be greatly appreciated.

Roland Goetz.






#30 | November 28th 04, 07:05 AM, posted to uk.rec.audio | Glenn Booth (Posts: 160) | The Outer Shell

Hi,

In message , Kalman Rubinson
writes
On Sat, 27 Nov 2004 23:03:18 +0000, Glenn Booth
wrote:

Hi,

In message 8y5qd.355388$Pl.15905@pd7tw1no, Spiderant
writes

I know this is a very basic question (and I will be hitting the library
today to see if I can improve my basic knowledge),


Start with "The acoustical foundations of music" by Bachus, ISBN:
0393090965. It does a very good job of explaining how musical
instruments create sound, and it also explores how various transducers
(such as ears and microphones) react to the pressure variations in air
caused by sounds..


An excellent suggestion! I had forgotten about that book and your
note reminded me to retrieve it and put it at the top of my re-read
list.

FWIW, the spelling on my copy is 'Backus.' Too much wine tonight? ;-)


Heh ... an appropriate typo, I think :-) Thanks for the correction, and
yes, too much wine. A rather nice Rioja, and I explored the theory that
the liquid at the bottom of the bottle tastes better than that at the
top. I don't remember the results of my experiment, so I may have to
repeat it at some point.

--
Regards,
Glenn Booth
 



