Audio Banter

Audio Banter (https://www.audiobanter.co.uk/forum.php)
-   uk.rec.audio (General Audio and Hi-Fi) (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/)
-   -   ASA and Russ Andrews again;!... (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/8348-asa-russ-andrews-again.html)

Don Pearce[_3_] January 16th 11 06:39 AM

ASA and Russ Andrews again;!...
 
On Sat, 15 Jan 2011 23:13:50 -0000, "David Looser"
wrote:

"Don Pearce" wrote

Separate clock only really works for short connections. Anything
longer and it is likely to lose phase with the signal - bad news.
Clock recovery from a data stream is so easy, and guaranteed to be
timed right, that I'm surprised that HDMI uses a separate clock line.


HDMI is intended for short distances, such as from a set-top box or BD player
to a TV, and it was developed from the DVI system intended to connect a
monitor to a computer, so possibly the developers felt that using a separate
clock line made sense.

Unfortunately HDMI has ended up with no fewer than 6 separate pairs in the
cable: 3 data pairs, one clock pair, the EDID pair which allows the exchange
of information on the capabilities of the sink, and the CEC pair for
optional control signals. Oh, and there's a 5V power supply pair as well.
The result is the need for a 19-pin plug with "early" and "late" mating
contacts. A complicated thing like that is either going to be expensive, or
unreliable. The HDMI developers went for unreliable. And because it's so
small it's near-enough impossible to replace in the field. So you have to
junk your expensive HD TV when the HDMI socket fails. :-(

That will be why new TVs tend to provide several HDMI inputs. You just
move on to the next when one has failed.

d

Don Pearce[_3_] January 16th 11 08:53 AM

ASA and Russ Andrews again;!...
 
On Sat, 15 Jan 2011 16:41:11 +0000 (GMT), Jim Lesurf
wrote:

In article , Don Pearce
wrote:
On Sat, 15 Jan 2011 11:41:02 +0000 (GMT), Jim Lesurf
wrote:



However I was picking up your unqualified statement that matching would
give a 'perfect output'. I agree the departure from perfection should
not normally be an *audible* problem. Indeed, if people want to worry
about it they should worry about the LF cable impedance departing from the
nominal value at high frequencies. :-)


I've always found it to be the other way round. Cable impedance is
pretty stable at high frequencies - it is only at low (kHz) frequencies
that the series and parallel resistance terms start driving the
impedance upwards.


Hence my comment about "LF cable impedance" above. :-)

Ah - when you wrote "departing from the nominal value at high
frequencies" I thought you meant that it departed at high frequencies.



d

Fed Up Lurker[_3_] January 17th 11 11:11 AM

ASA and Russ Andrews again;!...
 
sipperty doodah

Dave,
You are side-stepping and backtracking, the subject was
and is the spdif, and you are still very wrong!
With the 16 bit CD format (and other digital data) there
is no such thing as "bit-for-bit" transfer/copying, just not
possible! So with the Red Book 16-bit CD format from the very
start error correction was built in; at basic recording level
it is not "allowed" to encode four of those 16 bits consecutively,
so as an example, a packet could NOT be: 1111111100000000
the simple reason was so that error correction could function
quite accurately at the decoding stage; the bits that were not
read correctly could accurately be guessed!
Do a google on:
reed solomon cross-interleaved error correction
and
red book (cd format)
There will be zillions of results for you to read up on!

We were talking about the Sony/Philips digital interface (spdif),
then you yet again jumped in and got gobby and revealed
how wrong you always are. And your codswallop about cat5.
But you spewed the old internet chestnut of digital transfer is
always perfect even though it had been pointed out it is not!
As always, only then do you do some homework.
As the thread evolves I can see you still jump in and get it wrong!
You write of some form of design to be possible to recover the
clock data, but how? If the spdif is an unclocked interface how
would such data be "recovered" if it isn't there?

Jitter.
I'm surprised at Jim and the issue of jitter. His boss PM developed
the JMS tool which has now long been adopted by the hardware
industry itself. Jim should have known this......
Jitter is a timing-based error, and the spdif is one standard which
is very susceptible to its effects.
For examples we will stick with the 16-bit cd format but it applies
to DVD, PCM of all types etc.... and cat5.
The *internal* CD transport connection is a form of IDE interface
(do a google on "IDE interface") which IS clocked! We can see
how the hardware brigade got their act together and it evolved
to the point that *internal* 16-bit digital transfer has jitter levels
of less than +/- 10psec, no discernible negative effect whatsoever!
But still error correction is required, there is no such thing as
"bit-4-bit" transfer!
But jitter and the spdif is another kettle of mackerels. Jitter is not
an audible distortion as such but via spdif its effects are very
discernible and established, and such negative effects are measurable.

The spdif is one very clumsy digital interface; if implemented
correctly there is no discernible difference between original/source
and copy/dac etc. But then there is impedance....
The spec for the spdif is 75 ohm; if the output, the cable and the input
meet that then dandy! But frequently transport to dac/dat/digital amp etc
are not a match (and the high end were the biggest criminals).
I'm no champion of snake oil, and cables/interconnects are subject
to seriously dubious claims by their marketeers, but the spdif
is one interface where it is dependent on a matched interconnect.
And there is no need for silly-priced "high-end" coax; if it is a true
75 ohm output and a true 75 ohm input, then all that is required is
a true 75 ohm well-screened aerial coax. But...
Error correction (guesswork) is still needed as there is no such
thing as bit-4-bit digital transfer, not even by cat5...

Where do you live Dave, anywhere near London? Ever sat down
with a glass of Chilean merlot and listened to a bunch of real-world
CD + DAT + DACs connected via differing digital interconnects?
If there was such a thing as bit-4-bit digital transfer then you could
explain why there is such a dramatic difference via the coax and toslink
outputs of one cd player into the same DAC or DAT?
Or the difference in sonics when same coax into same DAC/DAT
but differing transport?

Have to go, I'm late shift moving London's commuters around.
Cheerio.

I'll be back in here on Wednesday.






Arny Krueger January 17th 11 12:28 PM

ASA and Russ Andrews again;!...
 
"Don Pearce" wrote in message

On Sat, 15 Jan 2011 12:27:02 -0500, "Arny Krueger"
wrote:

Part of the problem is a lack of understanding and/or
agreement about how much jitter is too much.


Well, that should be easy enough. Too much is where there
is bit-ambiguity at the clocking point. To get that, you
need jitter of the order of half a bit. A stereo stream
of 192kS/sec and 24 bits has a half bit duration of 54
nanoseconds, so peak jitter should preferably be
comfortably less than that.


Trust me, that much jitter is audible, and clearly so if the DAC itself
cannot reject it.

Some years back I built a jitterizer, which allowed me to use an audio
signal to apply FM to a SP/DIF signal. The range of added jitter that I
could apply went from zero to enough jitter to cause either of the two DACs
I had on hand to lose clocking and mute.

One DAC had just about no jitter rejection, and could be coaxed into
producing audio signals with clearly audible tremolo. Tremolo, as in
producing smooth transitions of a steady tone from one pitch to another, or
the equivalent with regular music as either source or modulator.

The other DAC produced output signals with all spurious responses about 120
dB down, whether I applied jitter or not. The only form of misbehavior was
that it ultimately unlocked which, under the test conditions, was perfectly
acceptable.

For a CD the figure is more like a third of a microsecond. Utterly
trivial.


Clearly audible if the DAC can't reject it. Some can, some can't. BTW,
neither of the DACs I tested was particularly high-end. I believe that the
poorer one cost me more.
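
The half-bit figures quoted in this exchange follow from simple arithmetic on the raw payload rate (channels x bits x sample rate, ignoring SPDIF subframe overhead, which is evidently the assumption behind the 54 ns figure):

```python
# Half-bit durations implied by the raw payload rates quoted above
# (assumes channels * bits * sample-rate with no subframe overhead):

def half_bit_ns(sample_rate, bits, channels=2):
    bit_rate = sample_rate * bits * channels   # payload bits per second
    return 0.5 / bit_rate * 1e9                # half a bit period, in ns

hi_res = half_bit_ns(192_000, 24)   # 192 kS/s, 24-bit stereo
cd     = half_bit_ns(44_100, 16)    # Red Book CD, 16-bit stereo
print(f"192k/24 stereo: {hi_res:.1f} ns")   # ~54 ns
print(f"CD:             {cd:.1f} ns")       # ~354 ns, a third of a microsecond
```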



Don Pearce[_3_] January 17th 11 03:53 PM

ASA and Russ Andrews again;!...
 
On Mon, 17 Jan 2011 08:28:58 -0500, "Arny Krueger"
wrote:

"Don Pearce" wrote in message

On Sat, 15 Jan 2011 12:27:02 -0500, "Arny Krueger"
wrote:

Part of the problem is a lack of understanding and/or
agreement about how much jitter is too much.


Well, that should be easy enough. Too much is where there
is bit-ambiguity at the clocking point. To get that, you
need jitter of the order of half a bit. A stereo stream
of 192kS/sec and 24 bits has a half bit duration of 54
nanoseconds, so peak jitter should preferably be
comfortably less than that.


Trust me, that much jitter is audible, and clearly so if the DAC itself
cannot reject it.

Some years back I built a jitterizer, which allowed me to use an audio
signal to apply FM to a SP/DIF signal. The range of added jitter that I
could apply went from zero to enough jitter to cause either of the two DACs
I had on hand to lose clocking and mute.

One DAC had just about no jitter rejection, and could be coaxed into
producing audio signals with clearly audible tremolo. Tremolo, as in
producing smooth transitions of a steady tone from one pitch to another, or
the equivalent with regular music as either source or modulator.

The other DAC produced output signals with all spurious responses about 120
dB down, whether I applied jitter or not. The only form of misbehavior was
that it ultimately unlocked which, under the test conditions, was perfectly
acceptable.

For a CD the figure is more like a third of a microsecond. Utterly
trivial.


Clearly audible if the DAC can't reject it. Some can, some can't. BTW,
neither of the DACs I tested was particularly high-end. I believe that the
poorer one cost me more.


I wasn't talking about jitter presented to the DAC, but the ability to
decode accurately from a proper phase-locked clock. I have no doubt
that this degree of jitter would be audible if it found its way as far
as the DAC.

d

David Looser January 17th 11 03:57 PM

ASA and Russ Andrews again;!...
 
"Arny Krueger" wrote in message
...
"Don Pearce" wrote in message

On Sat, 15 Jan 2011 12:27:02 -0500, "Arny Krueger"
wrote:

Part of the problem is a lack of understanding and/or
agreement about how much jitter is too much.


Well, that should be easy enough. Too much is where there
is bit-ambiguity at the clocking point. To get that, you
need jitter of the order of half a bit. A stereo stream
of 192kS/sec and 24 bits has a half bit duration of 54
nanoseconds, so peak jitter should preferably be
comfortably less than that.


Trust me, that much jitter is audible, and clearly so if the DAC itself
cannot reject it.


I think you missed the point of Don's post. Don is talking about jitter of
sufficient amplitude as to make bit-errors likely, the implication being
that he, like me, assumes that any DAC worth its salt will NOT feed jitter
from the SPDIF input into the clock of the DAC proper. This shouldn't be
hard; digital audio transmission has been around for half a century now, and
the problems that affect it and the techniques for dealing with those
problems are well understood.

David.



Jim Lesurf[_2_] January 17th 11 04:13 PM

ASA and Russ Andrews again;!...
 
In article , Fed Up Lurker
wrote:
sipperty doodah


Dave, You are side-stepping and backtracking, the subject was and is
the spdif, and you are still very wrong! With the 16 bit CD format (and
other digital data) there is no such thing as "bit-for-bit"
transfer/copying, just not possible! So with the Red Book 16-bit cd
format from the very start error correction was built in, at basic
recording level it is not "allowed" to encode four of those 16 bits
consecutively, so as an example, a packet could NOT be: 1111111100000000
the simple reason was so that error correction could function quite
accurately at the decoding stage, the bits that were not read correctly
could accurately be guessed! Do a google on: reed solomon
cross-interleaved error correction and red book (cd format) There will
be zillions of results for you to read up on!


When reading about RS coding and the other methods used to carry audio data
on 'red book' CDs, take care to distinguish between what Philips called
'channel bits' and 'audio bits'. Similarly with spdif, note that the
encoding modulation there isn't the same as either of them.

Channel bit streams do, indeed, have limits placed on how long (or short) a
run of zeros you can have. This is to aid tracking, etc. But there are no
such limits on the audio bits or sample values. So any sequence of legal
16-bit *audio sample* values is legal at the audio bit/word level.
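
As a toy illustration of that run-length rule on *channel* bits: EFM requires at least 2 and at most 10 zeros between consecutive ones (real EFM also constrains the merging bits between symbols, which this sketch ignores):

```python
def efm_run_lengths_ok(channel_bits: str) -> bool:
    """Check the EFM run-length rule for CD channel bits: every pair of
    consecutive 1s must be separated by 2 to 10 zeros. Toy checker only."""
    ones = [i for i, b in enumerate(channel_bits) if b == "1"]
    for a, b in zip(ones, ones[1:]):
        zeros_between = b - a - 1
        if not 2 <= zeros_between <= 10:
            return False
    return True

print(efm_run_lengths_ok("100100010000000001"))  # True: runs of 2, 3, 9 zeros
print(efm_run_lengths_ok("1100"))                # False: adjacent 1s
```

Note the constraint lives entirely at the channel-bit layer; the 16-bit audio samples it carries are unrestricted, which is Jim's point.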

It is certainly possible to make bit-perfect copies of CD *audio* sample
data values. Something I've done repeatedly via SPDIF and checked by doing
a comparison of the result with what is on the CD.
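
One way such a comparison can be done is to hash the sample data of two captures, skipping the container header. The 44-byte offset below assumes the simplest canonical WAV layout; real rips may need proper chunk parsing:

```python
import hashlib

def audio_hash(path, header_bytes=44):
    """Hash the sample data of a canonical 44-byte-header WAV file,
    skipping the header so two rips compare on audio bytes alone."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        f.seek(header_bytes)
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Two captures are bit-perfect copies iff their hashes match, e.g.:
# audio_hash("rip_via_spdif.wav") == audio_hash("rip_direct.wav")
```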

We were talking about the Sony/Philips digital interface (spdif), then
you yet again jumped in and got gobby and revealed how wrong you
always are. And your codswallop about cat5. But you spewed the old
internet chestnut of digital transfer is always perfect even though it
had been pointed out it is not!


I've certainly measured it being a bit-for-bit correct transfer on a number
of occasions via spdif.

However as Arny and others have pointed out, it is a different issue to
ensure that a *DAC* converts that 'perfectly' into an *analogue* output
pattern. One reason being - as discussed - that if the DAC can't deal well
with data timing problems then they phase/frequency modulate the intended
output and generate phase-noise sideband effects. Again, easy to measure
even when small enough to (in my experience) pass unnoticed by ear. :-)
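
The size of those phase-noise sidebands follows from small-angle phase modulation: sinusoidal jitter of peak amplitude tj on a reconstructed tone at f0 gives a peak phase deviation beta = 2*pi*f0*tj radians, and first-order sidebands roughly 20*log10(beta/2) dB below the carrier. A quick sketch (illustrative numbers, not a measurement of any particular DAC):

```python
import math

def jitter_sideband_dbc(f0_hz, tj_peak_s):
    """First-order sideband level (dB rel. carrier) for sinusoidal jitter
    of peak amplitude tj applied to a tone at f0.
    Small-angle PM: beta = 2*pi*f0*tj, sideband amplitude ~ beta/2."""
    beta = 2 * math.pi * f0_hz * tj_peak_s
    return 20 * math.log10(beta / 2)

# 1 ns of sinusoidal jitter on a 10 kHz tone:
print(f"{jitter_sideband_dbc(10e3, 1e-9):.1f} dBc")   # about -90 dBc
```

So nanosecond-scale jitter reaching the conversion clock puts sidebands around -90 dBc on a 10 kHz tone: straightforward to measure, plausibly inaudible, consistent with the experience described above.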


Jitter. I'm surprised at Jim and the issue of jitter. His boss PM
developed the JMS tool which has now long been adopted by the hardware
industry itself. Jim should have known this......


You seem fond of assuming others don't know what they've actually known for
years. :-)

So far as I recall Julian Dunn was the main initiator of this topic. PM
then developed his own system for assessing it. But TBH doing this isn't
rocket science if you understand phase modulation, etc.


The spdif is one very clumsy digital interface, if implemented
correctly there is no discernible differences between original/source
and copy/dac etc.


I'd say it is actually quite a neat serial format as it embeds the clock in
quadrature and is essentially a simple form of differential biphase
modulation. The problem is that some DACs don't do as well as they should.
Seems to me like a design problem with the DACs not the data format. On
that basis I do agree with Don. Alas, in the real world some domestic DACs
may not work as well as they should. Hence it makes sense for someone like
PM to publish measurements to at least try and keep manufacturers honest.
:-)


But then there is impedance.... The spec for the spdif is 75ohm, if the
output, the cable and the input meets that then dandy! But frequently
transport to dac/dat/digital amp etc are not a match (and the high end
were the biggest criminals).


And in practice it is effectively impossible since none of the real cables
will maintain the same characteristic impedance right down to low
frequencies. Fortunately, as Don (IIRC) pointed out, that generally doesn't
matter as the connections in home use are usually short. I've cheerfully
used 50 Ohm coax at times - admittedly with Meridian DACs - and not had any
problems. Ditto for making up my own switchboxes with no attempt to make
them '75 Ohm'. Would not use such things when testing kit, though, as the
results might not be a fair representation of what they can do in use under
more appropriate conditions. :-)
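
Why a 50/75 ohm mix is usually survivable over short runs can be put in numbers: the mismatch reflects only a fifth of the incident voltage, and on a short cable the reflected edge arrives well inside the bit cell. A minimal sketch:

```python
import math

def reflection(z_load, z0=75.0):
    """Voltage reflection coefficient and return loss at a cable/load junction."""
    gamma = (z_load - z0) / (z_load + z0)
    return_loss_db = -20 * math.log10(abs(gamma))
    return gamma, return_loss_db

# 50 ohm coax driving a 75 ohm input (or vice versa):
g, rl = reflection(50.0, 75.0)
print(f"gamma = {g:.2f}, return loss = {rl:.1f} dB")
```

A reflection coefficient of magnitude 0.2 (about 14 dB return loss) shifts the edge positions only slightly, which fits the report of trouble-free 50 ohm coax in domestic use.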


Where do you live Dave, anywhere near London? Ever sat down with a glass
of Chilean merlot and listened to a bunch of real world CD + DAT + DAC's
connected via differing digital interconnects?


Erm..., my experience is that any amount of alcohol degrades people's
ability to hear reliably. They may *enjoy* listening more, but that is a
different matter, I think. 8-]

Slainte,

Jim

--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html


David Looser January 17th 11 06:54 PM

ASA and Russ Andrews again;!...
 
"Fed Up Lurker" wrote in message
...
sipperty doodah

Dave,
You are side stepping and back tracking, the subject was
and is the spdif, and you are still very wrong!
With the 16 bit CD format (and other digital data) there
is no such thing as "bit-for-bit" transfer/copying, just not
possible! So with the Red book 16bit cd format from the very
start error correction was built in, at basic recording level
it is not "allowed" to encode four of those 16bits consecutively,
so as an example, a packet could NOT be: 1111111100000000
the simple reason was so that error correction could function
quite accurately at the decoding stage, the bits that were not
read correctly could accurately be guessed!
Do a google on:
reed solomon cross-interleaved error correction
and
red book (cd format)
There will be zillions of results for you to read up on!


All of that relates to the recording of data on the CD, NOT to SPDIF
transmission. As I said before, SPDIF can be used for digital audio from any
source; it is not particularly related to CD-sourced audio.

We were talking about the Sony/Philips digital interface (spdif),
then you yet again jumped in and got gobby and revealed
how wrong you always are. And your codswallop about cat5.


It was a long way from being "codswallop". What you mean is that your own
knowledge is so limited that you do not understand that balanced 110 ohm
transmission of digital audio over twisted pairs is common in the
professional variant of SPDIF, known as AES/EBU. The only differences
between SPDIF and AES/EBU, BTW, are that the information contained within
the metadata is different and whilst the consumer variant specifies 75 ohm
co-ax and toslink, the pro variant specifies 75 ohm co-ax and 110 ohm twisted
pair. But in all other respects the two are identical and can readily be
interconnected. It is not at all uncommon for semi-pro equipment to be
provided with connectors for all three types of cable: optical, co-ax and
twisted pair.

You write of some form of design to be possible to recover the
clock data, but how - If the spdif is an unclocked interface how
would such data be "recovered" if it isn't there?


That paragraph has really underlined just how poor your knowledge is. Clock
isn't "data", it's clock. And clock recovery is a normal (and necessary)
part of any device that receives digital data which is not accompanied by a
separate clock transmission path. Clock recovery can be done well, or badly.
It seems that most consumer SPDIF-input audio DACs do it badly. That's a
shame but it doesn't mean it can't be done well.

Jitter.
I'm surprised at Jim and the issue of jitter. His boss PM developed
the JMS tool which has now long been adopted by the hardware
industry itself. Jim should have known this......
Jitter is a timing-based error, and the spdif is one standard which
is very susceptible to its effects.


In what respect do you think SPDIF is any more susceptible to jitter than any
other digital transmission standard? Or do you simply mean that there are a
lot of sub-standard consumer audio DACs about?

For examples we will stick with the 16bit cd format but it applies
to DVD, PCM of all types etc.... and cat5.
The *internal* CD transport connection is a form of IDE interface


Some are, some aren't. But in any case it has precisely nothing to do with
jitter on SPDIF.

(do a google on "IDE interface") which IS clocked! We can see
how the hardware brigade got their act together and it evolved
to the point that *internal* 16-bit digital transfer has jitter levels
of less than +/- 10psec, no discernible negative effect whatsoever!
But still error correction is required, there is no such thing as
"bit-4-bit" transfer!


The error correction in CD players is required because of the large number
of bit errors on CDs; nowt to do with jitter.

But jitter and the spdif is another kettle of mackerels. Jitter is not
an audible distortion as such but via spdif its effects are very
discernible and established, and such negative effects are measurable.


As we keep telling you, that ain't necessarily so. It's perfectly possible
to design an audio DAC which is unaffected by jitter right up to the point
where it causes bit-ambiguity.

The spdif is one very clumsy digital interface,


Rubbish, it's an excellent interface. As I mentioned before it uses bi-phase
mark encoding to facilitate clock recovery regardless of the data patterns.
And as Don mentioned, over medium and long distances "clocked" (to use your
personal terminology) systems can perform worse in terms of jitter than
"unclocked" because of transit delay differences between the data and clock
channels.
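
Biphase-mark coding can be sketched in a few lines: every bit boundary carries a transition (the embedded clock), and a 1 adds a mid-bit transition, so the line toggles at least once per bit whatever the data:

```python
def biphase_mark_encode(bits, level=0):
    """Biphase-mark (BMC) encode a bit sequence into line half-cells.
    Every bit starts with a transition (the clock edge); a 1 adds a
    mid-bit transition. The line never idles, so a receiver can recover
    the clock from the guaranteed edges regardless of the data pattern."""
    out = []
    for b in bits:
        level ^= 1            # transition at every bit boundary
        out.append(level)
        if b:
            level ^= 1        # extra mid-bit transition for a 1
        out.append(level)
    return out

print(biphase_mark_encode([0, 1, 0, 1, 1]))
# [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
```

Note no run of half-cells ever exceeds two, which is what makes clock recovery from the data stream straightforward; the real SPDIF subframe also uses preambles that deliberately violate BMC for framing, which this sketch omits.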

if implemented
correctly there is no discernible differences between original/source
and copy/dac etc.


As I've kept telling you!

But then there is impedance....


It's certainly true that a mismatch can cause an increase in jitter, but as
I keep having to tell you that does not necessarily have to mean any audible
or measurable effects on the analogue output. You would not expect it to
with pro gear.

Error correction (guesswork) is still needed as there is no such
thing as bit-4-bit digital transfer, not even by cat5...


Error correction is NOT "guesswork". Error correction uses redundancy in the
data to *correct* (as the name implies) the errors. You might be thinking of
error concealment.
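
A minimal illustration of correction-by-redundancy is the classic Hamming(7,4) code (not the CIRC code CDs actually use, which is far more powerful, but the principle is the same): the parity bits pin down exactly which bit flipped, so the decoder restores it deterministically rather than guessing:

```python
def hamming74_encode(d):
    """Encode 4 data bits into 7 with three parity bits (Hamming(7,4))."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(c):
    """Return the corrected codeword; the syndrome names the flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
damaged = list(word); damaged[4] ^= 1          # flip one bit in transit
assert hamming74_correct(damaged) == word      # exactly recovered, not guessed
```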

Where do you live Dave, anywhere near London? Ever sat down
with a glass of Chilean merlot and listened to a bunch of real world
CD + DAT + DAC's connected via differing digital interconnects?


Over the years I've listened to very many real-world digital systems using a
wide variety of interconnects, though generally without the alcohol.

If there was such a thing as bit-4-bit digital transfer then you could
explain why there is such a dramatic difference via the coax and toslink
outputs of one cd player into the same DAC or DAT?
Or the difference in sonics when same coax into same DAC/DAT
but differing transport?


We have long since got the answer to that one; poor clock
recovery in consumer-grade stand-alone DAC units.


There is a saying: "a little knowledge is a dangerous thing". You have that
"little knowledge" plus the arrogance to think that makes you an "expert".
Well it doesn't, and your post has really pointed up just how weak your
understanding of the underlying issues is. If you really want to
understand the issues you'd do far better to read papers written by those
who work professionally in the area of digital transmission rather than HiFi
magazines, which are highly suspect as sources of technical information.
And I'd be careful about relying too much on Google as well if I were you.
Some material on the internet is excellent, but much is no better than you
find in the HiFi magazines.

David.







tony sayer January 18th 11 09:50 AM

ASA and Russ Andrews again;!...
 
But then there is impedance.... The spec for the spdif is 75ohm, if the
output, the cable and the input meets that then dandy! But frequently
transport to dac/dat/digital amp etc are not a match (and the high end
were the biggest criminals).


And in practice it is effectively impossible since none of the real cables
will maintain the same characteristic impedance right down to low
frequencies. Fortunately, as Don (IIRC) pointed out, that generally doesn't
matter as the connections in home use are usually short. I've cheerfully
used 50 Ohm coax at times - admittedly with Meridian DACs - and not had any
problems. Ditto for making up my own switchboxes with no attempt to make
them '75 Ohm'. Would not use such things when testing kit, though, as the
results might not be a fair representation of what they can do in use under
more appropriate conditions. :-)


Some time ago we had an urgent need to establish a high-quality audio
circuit betwixt two points. Fortunately we had a line-of-sight path
between the two locations over some 4 miles. We used an olde 1.395 GHz
video sender to carry AES/EBU digital via an impedance-matching
transformer arrangement at each end. From the transformer there was at
the one end 200-odd metres of CT100 domestic TV coax and around 70
metres at the other before it hit the video sender(s).

Much to most everyone's surprise this worked very well for over a week,
which was the time the "circuit" was required for. Some people were
asked to assess the performance using known CDs at one end and all
concerned thought it was fine 'n dandy with no noticeable
degradation;!...



--
Tony Sayer


Arny Krueger January 18th 11 11:56 AM

ASA and Russ Andrews again;!...
 
"David Looser" wrote in
message

In what respect do you think SPDIF is any more susceptible
to jitter than any other digital transmission standard? Or
do you simply mean that there are a lot of sub-standard
consumer audio DACs about?


SPDIF over coax is an absolutely lovely digital audio signal transmission
medium compared to, say, over-the-air HDTV.

If SPDIF is as unmanageable as some audiophiles seem to think, OTA HDTV would
be completely unlistenable much of the time, what with multipath and
reflections off of moving reflective objects in the signal path like trees,
etc.

Many audiophiles seem to think that it's still 1970.



