CEP and overshoots was Dual mono vs. mono mono interrogative...
In article , Don Pearce
wrote: On Sun, 08 Nov 2009 10:03:34 +0000 (GMT), Jim Lesurf wrote:

I've been wondering if you might like to help with an experiment to check this out. If I were to put up a couple of short LPCM WAV files of, say, a 'waveform from hell' or an offset impulse, could you look at them with CEP and find the peaks in the zoomed-in reconstruction waveforms?

Yes of course. Go ahead - I'll be interested.

OK. I've now put a small zip at http://jcgl.orpheusweb.co.uk/temp/WaveFromHell.zip

This should contain two 'CDDA format' wav files, each of about 6MB and 35 seconds duration. Details as below. However, because the data in the files is highly periodic (no dithering) they compress down into a 53K zip. The data essentially consists of the same 140 pairs of values over and over as the waveform cycles.

N.B. I've created these in a quick and rough way so I'm not certain they are correct. They looked OK when I checked them. BUT I have only recently started doing new versions of my audio file creation/processing/analysis software to be able to handle Wave formats. Until a few months ago it was convenient to use a raw data format for my audio files. But since starting to use Linux alongside RISC OS I've started producing new versions of my existing software that can work with Wave files. I've also started doing Linux versions of the main applications.[1]

That said, the two files are both in stereo 44.1k format. They also have a top and tail of a few seconds of zeros (silence). One file then has a '0dB' mono (i.e. both channels the same) version of the 'Waveform From Hell'. This should probably generate the largest out-of-range spikes you've ever seen for LPCM data. :-) Note I've fractionally underscaled to avoid actually hitting the genuine maximum sample values, to avoid problems with any software that doesn't understand that sample values aren't normal integers and so flips the sign for one range-max value. IIRC I went under max by one bit, but my memory on that may be wrong as I wrote the code to generate the waveform some time ago.

The other file has the waveform reduced by 10dB and is in antiphase to make the two waveforms show up when displayed. (You may find the mono file only shows as one line as the channels are identical.)

If you play these through an audio system I have to wash my hands of any responsibility for damage! :-) Tweeters in particular may not like them. Nor a poor amp driving a capacitive load.

Hope the above files are OK. If not, I'll have another go at making decent files.

Cheers, Jim

[1] If anyone is interested these will all be put on the website for free use and contain all the 'C' source code so people can examine how they work to look for bugs, etc.

--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
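For anyone who wants to see the effect Jim's files are built around without downloading them, here is a minimal Python sketch (not Jim's actual generator - his 140-sample repeating waveform is a more extreme case). A quarter-sample-rate sine sampled 45 degrees off its peaks has every stored sample at about -3dBFS, yet the ideal sinc reconstruction between the samples reaches 0dBFS:

```python
import numpy as np

N = 256
n = np.arange(N)
# fs/4 sine sampled 45 degrees off-peak: every sample is +/-0.7071,
# but the underlying band-limited waveform has amplitude 1.0
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

# ideal (sinc) reconstruction on a 16x finer grid, evaluated away from
# the ends of the block to avoid truncation error in the sinc sum
t = np.arange(64, 192, 1 / 16)
recon = np.array([np.sum(x * np.sinc(ti - n)) for ti in t])

sample_peak_db = 20 * np.log10(np.abs(x).max())   # about -3.01 dBFS
true_peak_db = 20 * np.log10(np.abs(recon).max()) # about 0 dBFS
print(f"sample peak {sample_peak_db:+.2f} dBFS, true peak {true_peak_db:+.2f} dBFS")
```

A display that only plots the stored samples reports about 3 dB of headroom here that the replay DAC's reconstruction filter does not actually have - the point Jim's out-of-range spikes are meant to demonstrate, only much larger.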
Dual mono vs. mono mono interrogative...
In article , Don Pearce
wrote:

What Iain is describing here is, I think, probably not related to the visual display. When setting levels it is not usual practice to look at the screen and judge when the waveform is close enough to the peak. You use the maths within the programme and choose your peak level that way. Then you will listen to what you have created before saving it.

OK.

If a job reaches the mastering DAW with gross clipping as Iain describes, then that procedure has not been followed and something nasty has gone on. At a guess I would say that all the work was done in floating point, which is a good idea as you don't have to dither intermediate stages (that noise can build up), then the final requantisation was done without regard to the fact that the peaks were well above FS. Just my guess.

That is also quite interesting. Although to be honest I'd say that dithering is still theoretically needed even for floating point, since the values are still quantised into a finite set. The problem then is that dither and noise shaping become more complicated as you have a NICAM-like process to deal with. But if they are using something like IEEE doubles this isn't likely to be much of a problem in reality!

FWIW all the internal calculations in my own programs tend to use IEEE double unless the process is trivial enough not to need this. This is because I have in the past found that even single float isn't enough to avoid some artifact problems in some cases. In fact, when doing an analysis of SACD a few years ago I was starting to think that even normal doubles weren't enough to examine the problems with DSD. However since SACD essentially faded away I decided that not many people would have cared about the problems it had anyway! :-)

Slainte, Jim

--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html
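The "final requantisation" step Don guesses went wrong - and the dither Jim says is needed there - can be sketched as follows. This is a generic illustration, not how any particular DAW does it; the function name and the choice of TPDF dither are the usual textbook recipe, assumed here rather than taken from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def requantise_16bit(x, dither=True):
    """Reduce float samples in [-1.0, 1.0] to 16-bit integers.

    TPDF (triangular) dither of 2 LSB peak-to-peak, added before
    rounding, decorrelates the quantisation error from the signal,
    turning correlated distortion into benign noise. Skipping it at
    intermediate stages is fine in float; skipping it at this final
    step is the kind of shortcut being discussed above.
    """
    y = x * 32767.0
    if dither:
        # sum of two independent uniform variables -> triangular PDF
        y += rng.uniform(-0.5, 0.5, y.shape) + rng.uniform(-0.5, 0.5, y.shape)
    return np.clip(np.round(y), -32768, 32767).astype(np.int16)

# a 997 Hz tone at -12 dBFS, one second at 44.1k
tone = 0.25 * np.sin(2 * np.pi * 997 * np.arange(44100) / 44100)
q = requantise_16bit(tone)
```

Note the clip to the asymmetric int16 range: exactly the "sample values aren't normal integers" sign-flip hazard Jim mentions earlier, which a naive round would trip over if peaks sat above FS.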
CEP and overshoots was Dual mono vs. mono mono interrogative...
On Sun, 08 Nov 2009 11:26:00 +0000 (GMT), Jim Lesurf
wrote: In article , Don Pearce wrote: [snip - quoted text of Jim's 'Waveform From Hell' post above]

I've had a look, and again created a couple of flash screen dumps so you can see how CEP (actually Audition) handles the various phases of zoom.

http://81.174.169.10/odds/WFH0dBMono.html
http://81.174.169.10/odds/WFH-10dBAntiphase.html

d
Dual mono vs. mono mono interrogative...
"Iain Churches" wrote in message ... "Keith G" wrote in message ... "Iain Churches" wrote in message ... "Keith G" wrote in message ... And don't joke about Des O Connor's Greatest Hits - I'm sure that's kicking about somewhere around here, or has done in the past!! Yes, Des O'Connor CBE, He probably still lives in that wacking great house down in Sussex and drives his maroon and grey turbo Bentley. :-) Poor chap :-(( Yes, never underestimate the power of the *ample-bosomed matron* bloc to make or break anyone's career in the entertainment industry! No strong feelings either way about the bloke myself - not my sort of thing by a country mile, but good luck to him anyway!! Another one of the Old School who has achieved nobility through longevity is Bruce Forsyth - same difference and good luck to him also!!l I agree. Nothing succeeds like success:-) Remember the famous Liberace quote: Once I used to go laughing all the way to the bank. Now I own it" Now it's probably billions in debt and *we* get to own it, but the best thing I remember about Liberace was when he demonstrated 'hemidemisemiquavers' on the piano - quite stunning! |
Dual mono vs. mono mono interrogative...
"Jim Lesurf" wrote in message ... In article , Keith G wrote: "Don Pearce" wrote No, that is doing pretty much the same thing. Apart from the fact that Sound Forge clearly just joins the dots with straight lines, while Audition makes curves. OK, but at what 'zoom'? - Sound Forge smooths out at 6:1 as per: http://www.moirac.adsl24.co.uk/shown...thwaveform.jpg Can't see any samples represented on that so no idea what it is doing. But I like that SF isn't too smoothed out, it makes it easier to do creative editing - normally straightening out the waveform, but here's a bit of hi-res 24:1 editing going fairly deliberately the other way: http://www.moirac.adsl24.co.uk/shown...edwaveform.jpg As with your earlier example that seems to just 'join dots' with straight lines. Which is almost worthless as a representation of the actual waveform you'd get from a correctly working player. For the kinds of examples I've been discussing the results would be quite different. So if you were using software with such a display you'd have to keep all the samples below -5dBFS to be certain no out-of-range peaks were being produced on replay - even though the display didn't show them. Although in practice you'd *probably* be safe with waveforms that hadn't already been boogered if you kept well below -2dBFS. That's fine if you were aware of this problem. But if not, using such a display to adjust/edit the sound and make it 'louder' would be bad news for the listeners. ?? Hey, hijack the thread and take it where you want; that's normal for Usenet newsgroups, but I'd suggest it's unsafe to presume the OP (me) is automatically interested where you go with it - my interest in the comparative resolution available in the editing softwares mentioned is not, in this instance, anything to do with sound levels. |
Dual mono vs. mono mono interrogative...
On Sun, 8 Nov 2009 12:02:36 -0000, "Keith G"
wrote: "Jim Lesurf" wrote in message ... In article , Keith G wrote: "Don Pearce" wrote No, that is doing pretty much the same thing. Apart from the fact that Sound Forge clearly just joins the dots with straight lines, while Audition makes curves. OK, but at what 'zoom'? - Sound Forge smooths out at 6:1 as per: http://www.moirac.adsl24.co.uk/shown...thwaveform.jpg Can't see any samples represented on that so no idea what it is doing. But I like that SF isn't too smoothed out, it makes it easier to do creative editing - normally straightening out the waveform, but here's a bit of hi-res 24:1 editing going fairly deliberately the other way: http://www.moirac.adsl24.co.uk/shown...edwaveform.jpg As with your earlier example that seems to just 'join dots' with straight lines. Which is almost worthless as a representation of the actual waveform you'd get from a correctly working player. For the kinds of examples I've been discussing the results would be quite different. So if you were using software with such a display you'd have to keep all the samples below -5dBFS to be certain no out-of-range peaks were being produced on replay - even though the display didn't show them. Although in practice you'd *probably* be safe with waveforms that hadn't already been boogered if you kept well below -2dBFS. That's fine if you were aware of this problem. But if not, using such a display to adjust/edit the sound and make it 'louder' would be bad news for the listeners. ?? Hey, hijack the thread and take it where you want; that's normal for Usenet newsgroups, but I'd suggest it's unsafe to presume the OP (me) is automatically interested where you go with it - my interest in the comparative resolution available in the editing softwares mentioned is not, in this instance, anything to do with sound levels. I think your original bit of the thread is long since answered and put to bed. It has, as threads do, taken on a new life and new direction which was prompted by your original one. 
So don't think of this as hijacking, which it would have been if it had happened at - say - post number 2. Rather it is evolution and metamorphosis. d |
Dual mono vs. mono mono interrogative...
"Jim Lesurf" wrote in message
In article , Don Pearce wrote: On Fri, 6 Nov 2009 21:12:14 +0200, "Iain Churches" wrote:

Is that so? I didn't know it was unique. I really only use Audition because I have sort of grown up with it throughout its CoolEdit incarnations, and I now use it more or less by instinct.

But AFAIK this Centre Channel Extractor does not exist in CEP Pro (or at least in the beta testers' version that I am familiar with). Did it only appear once Adobe bought it? I'm glad they did something more useful than just making the interface "pretty".

Interesting. I own Audition 2.0 but never adapted to the new interface well enough to shift over to using it as my daily driver.

Pardon me for hijacking this thread, but the mentions of CEP prompt me to ask a question about it. I don't use CEP or know anything about how it works. However in a thread on a couple of tv/broadcasting technical groups I've been discussing the problem of intersample peaks that can produce 'overshoots' going above 0dBFS if someone scales the samples up too close to 0dBFS.

True. This can happen in the real world, too.

I've been told that CEP shows the shape in between samples if you 'zoom in', and that it uses an approximation of the formally correct sinc function to do this.

I know of no authoritative discussions of CEP or Audition internals.
Dual mono vs. mono mono interrogative...
"Don Pearce" wrote in message ... On Sat, 7 Nov 2009 16:29:30 -0000, "Keith G" wrote: "Don Pearce" wrote in message ... On Sat, 7 Nov 2009 15:40:16 -0000, "Keith G" wrote: "Don Pearce" wrote So-many-to-1 just isn't relevant as a figure. What you see at what zoom will depend on how long the original bit of music was. Hmm, that's not my understanding of how the programme works but, whatever, it's academic to me - zoomimg in and out is simply a question of spinning the (mouse) wheel back and forth to show me what amd how much I want to see on the screen. It's very fast and easy to do in SF; the only constraint is that the zoom factor has to be 1:32, or bigger, to be able to use the pencil tool. What you need from your software is what Audition tells you down the bottom. The exact start and stop times of the visible window and whatever you have selected inside it. You can see that easily without the box when it is zoomed in far enough to show the sample points. Now, as for that second edit waveform, I'm afraid it shows the limitations of Sound Forge. Like how? (Not that I GAS - I'm not selling it for a living or anything...??) Because what you see on the screen bears only the most passing resemblance to what emerges from the DAC. And in Audition, I don't think anyone would ever bother to "paint" out a click. Quick and easy to do in SF but I'm picking up Izotope RX in a little while and that may well alter the way I do things - my only interest in 'sound editing' atm is cutting whole side/whole disc LP recordings into tracks, trimmimg them and removing bothersome pops and clicks. If Izotope can do that well enough ('de-clicking' software I tried in the misty past was pretty much NFG) it will alter the way I do things and obviously speed up the workflow.... (Except that I don't *need* the workflow speeded up....???) When I digitize vinyl I tend not to trim (except at the finish of a side, I leave the needle drop in place). Nice touch! 
Get an 'auto return' deck for that lovely little 'syonara/see you later' lift-off sound!! :-) I leave the inter-track spaces exactly as they are, and just drop in zero delay track markers to separate them. That way I can play what sounds exactly like a whole side of the original lp. OK, but no good for me - I need individual numbered and titled tracks that can be found in a search. I frequently know a track I want to hear but don't know/can't remember which album it's from!! (An album is a folder with individual track files in it; if I want to play the whole disc I just hit 'Play All' - I cba with playlists and the like!) The tracks are still numbered, named and findable. It is just that when you play the whole thing you don't hear the joins. It sounds exactly like the LP. If I want to hear 'exactly like the LP' - I play the LP..!! ;-) This is only for ripping to CD, you understand. Why waste time with 'hyphen technology'? - You need one of these: http://www.brennan.co.uk/home/ On the PC they are like yours - files in a folder. Messing with the 'vinyl digitisations' is no chore to me - I get to listen (over and over, if I want or just let a track run) as I cut up the 'sausage string', trim the individual tracks to the right 'lead in' and lead out' lengths, add fades where I want and mute intertrack 'dead wax' surface noise so the **quiet bits aren't noisy**!! (Nutter Allison, are you reading this? :-) d |
Dual mono vs. mono mono interrogative...
"Iain Churches" wrote in message
"Jim Lesurf" wrote in message ... FWIW what prompted this was someone saying it was a good idea to always normalise so the max came to -0.5dBFS. I was then pointing out this could be a mistake if you only looked at the sample values - for reasons shown on the page I reference above. The question then became, what does CEP actually display? Does it show the user a waveform that would allows them to see if this problem was causing their output to exceed 0dBFS or not for arbitrary waveforms? Personally, I would not trust software like CEP at anywhere close to OdBFS. The voice of inexperience speaks! I've used CEP extensively for the better part of a decade, for recording and technical testing. It is generally trustworthy - certainly more trustworthy than many of the people who denigrate it. I have seen .wav files made on CEP (which the person who made them claims are *clean*) that are considerable clipped when uploaded to to mastering DAW. A person with common sense would assign the probable blame to the person making the assertions. |
Dual mono vs. mono mono interrogative...
"Iain Churches" wrote in message
If it had been overshot a couple of dB with the traditional 10 or even 6dB headroom no harm would have been done.

There is no headroom in the digital domain.

As it was, his material was rejected.

Interesting that you would publicly criticize a well-known software product based on a producer whose recordings you wouldn't trust.