February 7th 12, 04:15 PM posted to sci.electronics.repair,uk.rec.audio,uk.tech.broadcast
Ian Jackson[_2_]
Audio Precision System One Dual Domain Measurement Systems

In message , David Looser writes
"Arny Krueger" wrote in message m...

"William Sommerwerck" wrote in message ...
BTW AM sound was always used with +ve vision modulation. I'm not sure that
there was a killer reason why FM could not have been used with +ve vision
modulation, but intercarrier reception (the cheap'n'easy way to receive FM
sound with TV) wouldn't work with +ve vision modulation unless there was
significant carrier amplitude remaining at the sync tips. Normally with +ve
vision modulation the carrier amplitude at sync tips was nominally zero.

Early US TV sets used separate video and audio IFs -- intercarrier had not
been thought of at that point.

My understanding is that "inverted" polarity was used to minimize the
effects of noise bursts on the sync pulses.


That's a good part of it. The net purpose of inverted polarity was to
improve subjective dynamic range. White flecks on a grey background are
far less obvious than black ones.

Umm... no. You've both got it the wrong way round. With -ve polarity, sync
pulses are more affected by noise bursts than with +ve polarity. And white
flecks are far more obvious than black ones. Part of the reason is that
impulse interference could greatly exceed the 100% vision carrier level,
saturating the video amplifier and, with +ve modulation, the CRT.

This was why US TVs, where -ve modulation was used from the beginning,
employed flywheel sync very early on, whilst UK TVs didn't. On the other
hand, UK TVs needed peak-white limiters to prevent the CRT defocusing on
the "whiter-than-white" interference specks.

The real benefit of -ve modulation was AGC. With -ve modulation, sync tips
correspond to 100% modulation and make an easy source for the AGC bias. With
+ve modulation, sync tips are at zero carrier, which obviously is useless for
AGC. Instead the back porch has to be used, and many different weird and
wonderful circuits were devised to "gate out" the signal voltage during the
back porch. Due to the need to keep costs down, manufacturers increasingly
turned to "mean-level AGC", in which the video signal itself was simply
low-pass filtered to form the AGC bias. This led to receiver gain being
varied by the video content, so the blacks in low-key scenes were boosted
whilst the whites in high-key scenes were reduced, leading to a general
greyness to everything. To me it looked awful, but as the Great British
Public kept buying these sets (and they were cheaper to build), mean-level
AGC became the norm for B&W UK domestic TV receivers. One great advantage of
colour was that mean-level AGC could not be used: to give correct colour
values, colour sets *had* to display a picture with a stable black level.
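
(A crude model of the two AGC schemes, again Python/NumPy -- the numbers are
invented, the point is only how the derived AGC voltage behaves as the
picture content changes:)

import numpy as np

def line_envelope(apl, n=1000):
    """Negative-modulation envelope for one line at a given average picture
    level (0 = very dark scene, 1 = very bright scene).  Illustrative levels:
    sync tip 1.0, blanking/back porch 0.75, peak white 0.10."""
    env = np.full(n, 0.75 - 0.65 * apl)   # active picture
    env[:70] = 1.00                       # sync tip = 100% carrier
    env[70:120] = 0.75                    # back porch at blanking level
    return env

for apl in (0.1, 0.5, 0.9):               # low-key, mid, high-key scenes
    env = line_envelope(apl)
    sync_tip_agc = env.max()               # peak detector: always sees the sync tip
    mean_level_agc = env.mean()            # cheap "low-pass the video" AGC
    print(f"APL {apl:.1f}: sync-tip AGC {sync_tip_agc:.2f}, "
          f"mean-level AGC {mean_level_agc:.2f}")

# Sync-tip AGC reads 1.00 whatever the scene, so receiver gain stays put.
# Mean-level AGC swings with the picture, so gain is wound up on dark scenes
# and wound down on bright ones -- the general greyness described above.
# (With +ve modulation there is no carrier at the sync tips to peak-detect,
# which is why the back porch had to be gated out instead.)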

Even with negative video modulation, it didn't seem to take the
Americans long to realise that they could cut costs by using AC coupling
in the video amplifier between the video detector and the CRT. [I've got
some old US monochrome TV circuits which definitely show AC coupling.]
As a result, the benefits of having an AGC line which didn't vary (much)
with video content would be essentially lost.

Regarding using the back porch as the signal reference, and deriving the
AGC from it, I recall a Wireless World article from around 1967 describing
a simple add-on circuit (which I made) which partly did this. It worked on
both 405- and 625-line signals. It wasn't intended to improve the horrible
mean-level AGC but, at the start of each video line, it did clamp the video
drive (to the cathode of the CRT) to the black reference of the back porch.
As a result, you still got the contrast varying with video content (maybe
not so much on 625), but at least the black stayed (more-or-less) black.
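
(In the same spirit, a minimal sketch of what a line-by-line back-porch
clamp does -- Python again, with made-up sample counts and levels:)

import numpy as np

def back_porch_clamp(video_line, porch=slice(70, 120), black_level=0.0):
    """Shift the whole line so that the back-porch average sits at
    black_level.  This is DC restoration: it pins black; it does not
    touch the contrast."""
    offset = video_line[porch].mean() - black_level
    return video_line - offset

# An AC-coupled video line whose DC level has wandered with picture content:
line = np.concatenate([np.full(70, -0.3),     # sync (most negative)
                       np.full(50,  0.0),     # back porch = true black
                       np.full(880, 0.6)])    # grey picture content
drifted = line + 0.25                         # DC lost through the coupling cap
clamped = back_porch_clamp(drifted)
print("back porch before/after clamp: "
      f"{drifted[70:120].mean():.2f} -> {clamped[70:120].mean():.2f}")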
--
Ian