In article , Roderick Stewart wrote:
In article , Bill Taylor wrote:
AFAIR from a course that I did some time ago, DigiBeta doesn't record
the raw digits; it does a discrete cosine transform on the picture
information and then records the result of that transform. Most of the
time this is completely reversible and so lossless, but on very
testing material the higher orders can be thrown away, so some very
slight loss could be experienced. In reality the format is pretty much
lossless. Multiple passes have been done experimentally and the
perceptible degradation is either very low or non-existent, unlike
analogue formats or some of the more highly compressed formats that
are used these days.
I've only had the sales talk, not a technical course, so I never got a
completely satisfactory explanation of how it is possible for a digital
bit rate compression system with any loss whatsoever to allow a signal
to pass through an unlimited number of times without cumulative losses
showing after a few dozen passes. Yet they showed us some special
effects that required several hundred passes, and split-screen displays
of several-hundredth generation against the original, with no visible
degradation at all, and they assured us that none was measurable.
I can't comment specifically on DigiBeta as I know nowt about it.
However, it is possible in principle for what you describe to be the case.
This would be if the data thinning always used exactly the same rules and
the data was otherwise unmodified between 'passes'. The point being that
the first data thinning removes the data that is 'unwanted'. Subsequent
thinnings would find that the data they'd remove is already absent, so they
are happy with that and pass what remains... :-)
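A minimal sketch of that idea, assuming a toy 8x8 DCT with a uniform
quantiser (my own stand-in, not DigiBeta's actual coefficients or rules):
the first 'thinning' loses a little, but re-applying exactly the same rule
to the unmodified result removes nothing further.

    import numpy as np
    from scipy.fftpack import dctn, idctn

    STEP = 16  # hypothetical quantisation step size

    def thin(block):
        # DCT the block, quantise the coefficients, then transform back.
        coeffs = dctn(block, norm='ortho')
        coeffs = STEP * np.round(coeffs / STEP)   # the lossy 'thinning'
        return idctn(coeffs, norm='ortho')

    rng = np.random.default_rng(0)
    original = rng.uniform(0, 255, (8, 8))        # stand-in for picture data

    pass1 = thin(original)                        # first generation: some loss
    pass2 = thin(pass1)                           # second generation

    print("loss on pass 1:", np.max(np.abs(pass1 - original)))
    print("extra loss on pass 2:", np.max(np.abs(pass2 - pass1)))

The second figure comes out as essentially zero (floating-point dust),
because the coefficients are already sitting on the quantiser's levels and
the rule finds nothing new to discard.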
Interpreting what the salesman was able to tell us, I think the system
must only be applying bit rate compression to those parts of the signal
that go above a "threshold" value corresponding to the maximum rate that
it can handle (50 Mb/s?), but passing the signal untouched the rest of the
time. This would mean the second and subsequent passes leave the signal
unchanged, because it has already been compressed once. Even that "very
testing material" would only suffer whatever losses resulted from the
first pass, and could then be copied an infinite number of times without
becoming any worse. Perhaps somebody who knows more about it could
confirm this?
In principle, yes, that would be a consequence of what I describe above.
The thinning works out which features have the 'priority' required not to
be thinned. On the later passes it then finds that this turns out to be
*all* the features in the data set, and nothing remains to be removed as it
has already gone.
If, however, the data is changed (by noise, distortion, or deliberate
alteration) between successive thinnings, then the above may not apply. It
also assumes that identical 'rules' are applied for every thinning.
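Continuing the same toy model as the sketch above (again my own assumption
of a uniform quantiser on an 8x8 DCT, not DigiBeta's real rules): hundreds
of undisturbed generations add essentially nothing, but a small disturbance
injected between generations lets errors accumulate.

    import numpy as np
    from scipy.fftpack import dctn, idctn

    STEP = 16  # hypothetical quantisation step

    def thin(block):
        # Same fixed rule each pass: quantise DCT coefficients, transform back.
        coeffs = dctn(block, norm='ortho')
        return idctn(STEP * np.round(coeffs / STEP), norm='ortho')

    rng = np.random.default_rng(1)
    original = rng.uniform(0, 255, (8, 8))

    first_gen = thin(original)                    # generation 1: some loss here
    quiet = noisy = first_gen
    for _ in range(300):                          # a few hundred more generations
        quiet = thin(quiet)                                   # undisturbed path
        noisy = thin(noisy + rng.normal(0, 4, noisy.shape))   # disturbed path

    print("drift after 300 clean passes:", np.max(np.abs(quiet - first_gen)))
    print("drift after 300 noisy passes:", np.max(np.abs(noisy - first_gen)))

The clean path stays where generation 1 left it; the disturbed path drifts,
because the added noise occasionally pushes coefficients across quantiser
boundaries, so each thinning then discards something new.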
Slainte,
Jim
--
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc
http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio
http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc.
http://www.st-and.demon.co.uk/JBSoc/JBSoc.html