The Outer Shell
On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant" wrote:

> "Ian Bell" wrote in message ...
>
>> I think you will find most of this group will tell you that your philosophy professor is completely wrong.
>>
>> Ian Bell
>
> I posted this question, which has intrigued me for quite a few years, in this newsgroup because it seems that a lot of the posters here know what they're talking about. If someone would give me a proper explanation as to why my professor was wrong, I would really appreciate it.
>
> But let me rephrase my question a bit. If a microphone is placed before an orchestra, and the microphone is connected to an oscilloscope, then from what I know of oscilloscopes the signal is not going to show every individual instrument, only the combined sound coming from the orchestra. Consequently, no matter what I do with that signal after it is recorded, and even if I had as many speakers as there are instruments in the orchestra, I can never again break the signal up to reproduce the original instruments. The recording is forever going to be only a shadow of the orchestra. Again, this seems quite logical to me.
>
> Now, as I believe Chris Morriss suggested in another posting, the diaphragm of an ear is not unlike the diaphragm of a microphone. Consequently, when listening to a live concert, I too would only hear the combined signal coming from the orchestra. However, as I mentioned to Mr. Morriss, when we go to a concert it is not a static event. We're constantly turning our heads and thereby altering the signal reaching our eardrums. Therefore, even if we can only experience the combined signal while attending a live performance, this shadow is constantly shifting and changing along with the shifts of our heads, and it becomes possible to discern the individual instruments in a way that a static recording can never reveal. Again, please correct me if this analagy is incorrect.
>
> Roland Goetz.

One issue is that the oscilloscope is not showing you all the information in the signal that allows the discrimination of individual instruments and other tonal/spatial details. The scope only shows the envelope of the total energy at a particular instant, not the individual elements which contribute to that envelope. As a simple example, compare the single instantaneous value on the scope with the detailed information seen on a frequency analyzer at that same instant. The ear is pretty good at a similar discrimination and extracts more information than a simple oscilloscope.

Kal
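Kal's scope-versus-analyzer comparison is easy to check numerically. The sketch below is not from the thread; it assumes Python with NumPy, and the three tone frequencies are arbitrary stand-ins for instruments. It prints the single value a scope trace would show at one instant, and then the spectral components an analyzer would resolve from the surrounding stretch of signal.

```python
import numpy as np

fs = 48_000                      # sample rate, Hz
t = np.arange(fs) / fs           # one second of signal
# Three "instruments": arbitrary illustrative tones at different levels
mix = (0.5 * np.sin(2 * np.pi * 220 * t)
       + 0.3 * np.sin(2 * np.pi * 440 * t)
       + 0.2 * np.sin(2 * np.pi * 1100 * t))

# What a scope shows at one instant: a single combined value
print("instantaneous value at t = 0.1 s:", mix[int(0.1 * fs)])

# What a frequency analyzer shows for the surrounding chunk of signal
spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), d=1 / fs)
top3 = freqs[np.argsort(spectrum)[-3:]]        # three strongest bins
print("strongest spectral components (Hz):", sorted(top3))
```

The single printed number says nothing about what produced it, while the three spectral peaks fall at the tone frequencies that were mixed in.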
"Kalman Rubinson" wrote in message ...

> One issue is that the oscilloscope is not showing you all the information in the signal that allows the discrimination of individual instruments and other tonal/spatial details. [...] As a simple example, compare the single instantaneous value on the scope with the detailed information seen on a frequency analyzer at that same instant. The ear is pretty good at a similar discrimination and extracts more information than a simple oscilloscope.

Because I don't have a frequency analyzer kicking around, I tried to find some images of what you are referring to (see this link: http://www.softpicks.net/software/Fr...yzer-6079.htm). I appreciate your explanation. It certainly appears from that screenshot that there is more happening at a given moment than a momentary energy pulse. If the screenshot is correct, then what I always assumed was a single linear stream of pulses coming from the microphone is in effect a multitude of simultaneous pulses. And if, for example, this signal is digitized, then instead of a linear series of pluses and minuses you're saying there is, in effect, more like a continuous stream of shotgun-style pepper blasts of multiple simultaneous frequencies.

Hmmmm. I have a bit of a hard time grasping this, because it would imply that, once the signal got to a speaker cone, the cone would need to move in and out simultaneously, which doesn't seem possible. Could you elucidate further where my thinking is flawed?

Much appreciated,

Roland Goetz.
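For what it's worth, the "moving in and out simultaneously" worry dissolves once the signal is written down: at any instant the cone has exactly one position, given by the sum of all the components at that instant. A minimal sketch (assuming NumPy; the component frequencies are arbitrary):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
components = [np.sin(2 * np.pi * f * t) for f in (220.0, 440.0, 1100.0)]

# The combined signal is the point-by-point sum of the components.
combined = sum(components)

# One value per instant: this single trajectory is all the cone ever follows.
n = int(0.25 * fs)
print("cone drive at t = 0.25 s:", combined[n])
print("equals the sum of the parts:",
      np.isclose(combined[n], sum(c[n] for c in components)))
```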
Spiderant wrote:
"Ian Bell" wrote in message ... Spiderant wrote: I think you will find most of this group will tell you that your philosophy professor is completely wrong. Ian -- Ian Bell I posted this question, which has intrigued me for quite a few years, in this newsgroup because it seems that a lot of the posters here know what they're talking about. If someone would tell me a proper explanation as to why my professor was wrong, I would really appreciate it. But let me rephrase my question a bit. If a microphone is placed before an orchestra, and the microphone is connected to an oscilloscope, from what I know of oscilloscopes, the signal is not going to show every individual instrument, but only the combined sounds coming from the orchestra. Consequently, no matter what I do with that signal after it is recorded, and even if I had as many speakers as instruments in an orchestra, I never again break the signal up to reproduce the original instruments. The recording is forever going to be only a shadow of the orchestra. Again, this seems quite logical to me. Now, as I believe Chris Morriss suggested in another posting, the diaphragm of an ear is not unlike the diaphragm of a microphone. Consequently, when listening to a live concert, I too would only hear the combined signal coming from the orchestra. However, as I mentioned to Mr. Morriss, when we go to a concert, it is not a static event. We're constantly turning our heads and thereby altering the signal coming to our eardrums. Therefore, even if we can only experience the combined signal while attending a live recording, this shadow is constantly shifting and changing along with the shifts of our heads and it becomes possible to discern the individual instruments that a static recording can never reveal. Again, please correct me if this analagy is incorrect. Replace the single microphone with a crossed pair, and move your head from side to side whilst listening via a stereo pair of speakers and you will have the same thing. It may be differemnt though if you move your head up and down, that information will not be recorded with a crossed pair. You could attampt to apply the same argument to the individual instruments, do you ever hear the guitar (for example), or just the combined sound of the strings, fretboard, sounding box mouth, and the body. -- Nick |
On Fri, 26 Nov 2004 03:32:36 GMT, "Spiderant" wrote:

> But let me rephrase my question a bit. If a microphone is placed before an orchestra, and the microphone is connected to an oscilloscope, the signal is not going to show every individual instrument, but only the combined sounds coming from the orchestra. [...] The recording is forever going to be only a shadow of the orchestra. Again, this seems quite logical to me.

You are forgetting one critical point about a modern recording - it's in stereo. The very best, in accuracy terms, are made using minimalist microphone techniques in real concert halls, and they can replicate the ambience of the hall extremely well.

> Now, as I believe Chris Morriss suggested in another posting, the diaphragm of an ear is not unlike the diaphragm of a microphone. [...] this shadow is constantly shifting and changing along with the shifts of our heads and it becomes possible to discern the individual instruments that a static recording can never reveal.

Given a good stereo recording, as described above, the soundfield reaching your head will closely mimic that which would reach your ears in the original concert hall at the microphone position, and sure enough, you can 'focus' on individual performers by slight movement of your head in the same way. The only real drawback is that, in a top-class system playing such a 'minimalist' top-class recording, the 'sweet spot' is very small, and moving your head more than a couple of inches from the bisector of the speakers will destroy the sharpness of the imaging.

> Again, please correct me if this analagy is incorrect.

It is incorrect - it should be 'this analogy'... :-)

Besides, just *listen* to a good recording on a good system. One careful observation is worth a thousand philosophical discussions!

--
Stewart Pinkerton | Music is Art - Audio is Engineering
Kalman Rubinson wrote:
> One issue is that the oscilloscope is not showing you all the information in the signal that allows the discrimination of individual instruments and other tonal/spatial details. The scope only shows the envelope of the total energy at a particular instant and not the individual elements which contribute to that envelope. As a simple example, compare the single instantaneous value on the scope with the detailed information seen on a frequency analyzer at that same instant. The ear is pretty good at a similar discrimination and extracts more information than a simple oscilloscope.

I would disagree slightly there. The oscilloscope is not extracting or processing any information; it's just showing a voltage-against-time display. All the information in the signal is being displayed by the scope; what's different is that our eyes are not able to extract and process that information.

If you only have a single instantaneous value on the scope you have no frequency information, and if you send that single value to a frequency analyser it will show nothing. It's just the difference between viewing the signal in the time domain and in the frequency domain.

You could equally say that the frequency display also doesn't show all the information, as it provides no information about the phase relationships between the various frequencies it displays, whereas this information is displayed by the scope.

--
Nick
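Nick's point that a magnitude-only analyzer display throws away phase can be shown directly: two signals with identical magnitude spectra but different phases have different waveforms. A small sketch (assuming NumPy; the harmonic frequencies and amplitudes are arbitrary):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
harmonics = [(100, 1.0), (300, 0.5), (500, 0.25)]   # (Hz, amplitude)

# Same components, but the second signal has its harmonics phase-shifted.
a = sum(amp * np.sin(2 * np.pi * f * t) for f, amp in harmonics)
b = sum(amp * np.sin(2 * np.pi * f * t + 0.7 * i)
        for i, (f, amp) in enumerate(harmonics))

mag_a = np.abs(np.fft.rfft(a))
mag_b = np.abs(np.fft.rfft(b))
print("magnitude spectra identical:", np.allclose(mag_a, mag_b))   # True
print("waveforms identical:        ", np.allclose(a, b))           # False
```

The scope trace would distinguish the two, the bar-graph spectrum would not, which is exactly the sense in which neither display alone is the whole story.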
In article xYwpd.329777$Pl.264539@pd7tw1no, Spiderant wrote:

> "Chris Morriss" wrote in message ...
>
>> What a plonker he was. And what were his views on the eardrum of the listener (being a diaphragm etc.)?
>
> I actually did think about that. When listening to a live performance, all the music is hitting my eardrums simultaneously (well, maybe not simultaneously as, from what I understand, some frequencies travel faster than others).

No. In air, and at normal sound levels, all frequencies travel at essentially the same velocity.

> Consequently, as per your suggestion, I would only hear the combined instruments - but only if I held my head exactly the same way and, perhaps, only if the musicians held perfectly still. But as soon as I turned my ears towards, say, the clarinets, then they would dominate over the violins, and so on. And when the solo pianist started to play, I would turn my head towards him or her and the piano would dominate. As a result, a live performance would seem much more dimensional, would it not?

This depends upon how well a *stereo* (or 'surround') recording replicates the original soundfield in terms of perception. Stereo is a 'trick' in the sense that it does not set out to physically replicate the original soundfield, but to give an effect which tricks our perception into thinking we are hearing a convincing representation of that soundfield.

> Since a recording can only play the combined signal from a stationary point,

Recordings can be made using a multiplicity of microphones, located in various places. The replay can involve two or more speakers not located in the same place.

> regardless of how I would turn my head when listening to my speakers, I don't see how I could distinguish the instruments in the same way. Please correct me if I'm wrong.

If you listen to good stereo recordings, played using good speakers, in a suitable room acoustic, it is possible to get a 'stereo' effect that is a fairly convincing impression of having the instruments laid out in front of you as they would be at, say, an orchestral concert. Once you hear this, you can decide for yourself that assuming it is not possible must be incorrect. :-)

FWIW during the last week or so I did some minor fiddling about with the audio system in the living room. This produced some apparent changes which I think have improved the results. A consequence is that I enjoyed spending time yesterday listening to:

1) CD-A of the Bartok Concerto for Orchestra (Mercury Living Presence on Mercury 432 017-2)

2) CD-A of "English String Music", Barbirolli and the Sinfonia of London (EMI CDC 7 47537 2). [I also have the later 'ART' re-issue, but tried the earlier version on this occasion.]

I chose these simply because they are performances/recordings I have enjoyed in the past, and fancied re-listening to them. In both cases I had the distinct impression of a convincingly realistic sound of instruments laid out in an acoustic space. No idea if this is exactly what it sounded like at the time, but the results sounded like a good directional image to me. That said, I had to ensure the speakers and my head were in the 'right places' to get the best effect. :-)

Slainte,

Jim

--
Electronics  http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc  http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio  http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.  http://www.st-and.demon.co.uk/JBSoc/JBSoc.html
In article otxpd.338175$nl.283401@pd7tw3no, Spiderant wrote:

> But let me rephrase my question a bit. If a microphone is placed before an orchestra, and the microphone is connected to an oscilloscope, from what I know of oscilloscopes, the signal is not going to show every individual instrument, but only the combined sounds coming from the orchestra.

The signals from the different instruments will be linearly superimposed, i.e. the information about all the instruments reaching the microphone location will all be present at that point.

> Consequently, no matter what I do with that signal after it is recorded, and even if I had as many speakers as instruments in an orchestra, I can never again break the signal up to reproduce the original instruments. The recording is forever going to be only a shadow of the orchestra. Again, this seems quite logical to me.

Yes. The same would occur if your ear were at the microphone location. The sound pressure at your ear would be the same linear superposition. Hence the place where the sounds are 'broken up' again and identified is in your ears/head in each case.

> Now, as I believe Chris Morriss suggested in another posting, the diaphragm of an ear is not unlike the diaphragm of a microphone. [...] this shadow is constantly shifting and changing along with the shifts of our heads and it becomes possible to discern the individual instruments that a static recording can never reveal. Again, please correct me if this analogy is incorrect.

Yes. Please see my comments elsewhere. 'Stereo' is essentially a 'trick' which exploits the properties of human perception. How well it works in any case depends upon the recording, the replay system (including the room), and the individual. My experience is that it can sometimes work very well, but in other cases not at all. :-)

Slainte,

Jim
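Jim's "linearly superimposed" remark is literally how the arithmetic works: the pressure at the microphone is the sample-by-sample sum of what each source contributes there. A tiny sketch with two made-up 'instruments' (assuming NumPy; the pitches and envelopes are invented for illustration):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs

# Two fictitious instruments with different pitches and onset envelopes
flute = np.sin(2 * np.pi * 523.25 * t) * np.exp(-3 * t)
cello = np.sin(2 * np.pi * 130.81 * t) * np.minimum(1.0, 5 * t)

# What the single microphone (or an eardrum at the same spot) receives:
mic = flute + cello

# Linearity means nothing is destroyed: knowing one part recovers the other
# exactly, although the single channel by itself doesn't say where to cut.
print("mic minus flute gives cello back:", np.allclose(mic - flute, cello))
```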
In article , Kalman Rubinson wrote:

> One issue is that the oscilloscope is not showing you all the information in the signal that allows the discrimination of individual instruments and other tonal/spatial details. [...] As a simple example, compare the single instantaneous value on the scope with the detailed information seen on a frequency analyzer at that same instant. The ear is pretty good at a similar discrimination and extracts more information than a simple oscilloscope.

Well, the oscilloscope display does not really 'extract information' beyond showing you a level-time waveform pattern. It is up to the viewer to make sense of it. :-)

It is potentially misleading to compare a scope reading as a "single instantaneous value" - i.e. at just one time instant - with a complete spectrum. The same information is in the level-time waveform pattern for a given time interval as is in the spectrum of that chunk of time. Thus a finite, extended time interval is required in both cases to compare "like with like".

Slainte,

Jim
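Jim's "like with like" point can be checked directly: take a finite chunk of waveform, transform it, and transform back; the complex spectrum of that chunk and the chunk itself are interchangeable representations of the same information. A short sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
chunk = rng.standard_normal(4096)           # any finite stretch of signal

spectrum = np.fft.rfft(chunk)               # complex: magnitude AND phase
recovered = np.fft.irfft(spectrum, n=4096)  # back to the time domain
print("waveform recovered exactly:", np.allclose(chunk, recovered))

# Parseval's relation: the total energy agrees between the two views too.
time_energy = np.sum(chunk**2)
freq_energy = (np.abs(spectrum[0])**2
               + 2 * np.sum(np.abs(spectrum[1:-1])**2)
               + np.abs(spectrum[-1])**2) / 4096
print("energies match:", np.allclose(time_energy, freq_energy))
```

A single sample compared with a whole spectrum is the mismatch Jim is warning about; a block of samples and its spectrum carry exactly the same information.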
In article Ruypd.343810$%k.53935@pd7tw2no, Spiderant wrote:

> Because I don't have a frequency analyzer kicking around, I tried to find some images of what you are referring to (see this link: http://www.softpicks.net/software/Fr...yzer-6079.htm). I appreciate your explanation. It certainly appears from that screenshot that there is more happening at a given moment than a momentary energy pulse. And if the screenshot is correct, then what I always assumed was only a linear stream of pulses coming from the microphone is in effect a multitude of simultaneous pulses.

I haven't looked at the URL you give. However, I suspect that thinking of audio waveforms as 'streams of pulses' is probably quite a confusing and inappropriate way to describe what is occurring. Analog waveforms are nominally a smoothly varying level whose pattern of level-time fluctuations conveys the information about the sounds being played/recorded/etc.

> And if, for example, this signal is digitized, then instead of a linear series of pluses and minuses you're saying that there is, in effect, more like a continuous stream of shotgun-style pepper blasts of multiple simultaneous frequencies. Hmmmm. I have a bit of a hard time grasping this because it would imply that, once the signal got to a speaker cone, the speaker cone would need to move in and out simultaneously, which doesn't seem possible. Could you elucidate further where my thinking is flawed?

The flaw is in your assumptions about both the analog and digital representations of the signal patterns. This then leads to the "hard time" you encounter in trying to see how your assumed process would work. See the above for a simple description of analog waveforms. Digital ones would (normally) need to be converted back into analog form to supply the waveforms required by the speaker. Thus the microphone and speaker do not produce/use 'pulses' but continually varying levels in the appropriate patterns.

Slainte,

Jim
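Jim's point that the stored samples describe one smoothly varying level, not a train of pulses, can be illustrated by rebuilding the continuous waveform from its samples with Whittaker-Shannon interpolation. A rough sketch, assuming NumPy and a tone comfortably below half the sample rate; a real DAC uses a hardware reconstruction filter rather than this brute-force sum, and the small residual error here comes from truncating the sinc sum at the block edges:

```python
import numpy as np

fs = 8_000                                          # sample rate, Hz
n = np.arange(2048)
samples = 0.8 * np.sin(2 * np.pi * 440 * n / fs)    # the stored sample values

def reconstruct(t_sec, samples, fs):
    """Ideal bandlimited reconstruction: sinc pulses weighted by the samples."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc(t_sec * fs - k))

# Evaluate the reconstructed waveform *between* the stored samples,
# away from the ends of the block.
test_times = 0.1 + np.arange(20) * 1.3e-4           # arbitrary instants, seconds
rebuilt = np.array([reconstruct(t, samples, fs) for t in test_times])
true_values = 0.8 * np.sin(2 * np.pi * 440 * test_times)
print("max reconstruction error:", np.max(np.abs(rebuilt - true_values)))
```

The samples are simply point readings of a single smooth waveform, and that smooth waveform is what the speaker cone is asked to follow.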
On Fri, 26 Nov 2004 10:17:31 +0000 (GMT), Jim Lesurf wrote:

> Well, the oscilloscope display does not really 'extract information' beyond showing you a level-time waveform pattern. It is up to the viewer to make sense of it. :-) [...] Thus a finite, extended time interval is required in both cases to compare "like with like".

Thanks for making a clearer statement than I did. The issue is, however, that there is more information to be gleaned from the signal than the OP is seeing when he looks at the 'scope.

Kal