
No, CD-Rs and CDs are produced by very different processes, and the distribution of errors cannot be assumed to be the same even though the error protection is the same. This "nit-picking" comes into play because CD-R writing (at home) isn't nearly as repeatable as the stamping used for CD production.

The rough summary of the papers is this:


Adams: Jitter is a problem that has been widely misrepresented in the popular audio press (a dig at Stereophile, who made jitter their hobbyhorse); jitter cannot simply be measured by hooking a scope up to the clock input on a DAC - it's far more complicated than that, because some jitter immunity can be built into the DAC itself; and even if you somehow knew the jitter, that doesn't translate directly into an effect on the output, but affects different DACs in different ways (a resistive ladder behaves differently from a MASH, which behaves differently from a delta-sigma).


Dunn: Even in worst-case scenarios, the jitter-induced error is still 40dB below the signal and is covered by auditory masking (and that was with the DAC technology available before 1992..)
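
As a rough sanity check on how a given amount of clock jitter maps to an error level relative to the signal, here's a small Python sketch. The figures in it (a 10 kHz tone, 44.1 kHz sampling, 2 ns RMS jitter) are illustrative picks of mine, not numbers from Dunn's paper:

```python
import math
import random

# Small-jitter rule of thumb: for a full-scale sine at frequency f, clock
# jitter with RMS value t_j puts the error roughly 20*log10(2*pi*f*t_j) dB
# below the signal.
def jitter_noise_db(f_hz, jitter_rms_s):
    return 20 * math.log10(2 * math.pi * f_hz * jitter_rms_s)

# Monte-Carlo cross-check: sample a sine at jittered instants and compare
# with the ideally sampled version.
def simulate_jitter_error_db(f_hz, fs_hz, jitter_rms_s, n=50_000, seed=1):
    rng = random.Random(seed)
    sig_pow = err_pow = 0.0
    for k in range(n):
        t = k / fs_hz
        ideal = math.sin(2 * math.pi * f_hz * t)
        jittered = math.sin(2 * math.pi * f_hz * (t + rng.gauss(0.0, jitter_rms_s)))
        sig_pow += ideal * ideal
        err_pow += (jittered - ideal) ** 2
    return 10 * math.log10(err_pow / sig_pow)

# Illustrative numbers: 10 kHz tone, 44.1 kHz sampling, 2 ns RMS jitter.
print(jitter_noise_db(10_000, 2e-9))                   # about -78 dB
print(simulate_jitter_error_db(10_000, 44_100, 2e-9))  # simulation lands close by
```

The point of the sketch is only the scaling: the error level rises with signal frequency and with RMS jitter, so getting anywhere near -40dB takes jitter orders of magnitude worse than the picosecond/nanosecond figures usually argued about.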


Benjamin and Gannon: "for nearly all program material no audible degradation was heard for any amount of jitter added below the level at which the DIR (Digital Interface Receiver) lost lock". [Note: they used several different DACs, and this comment is important as it relates to the initial question about cables and transports..]

The reason I wrote "Groan" and then corrected you with "distribution of timing errors" is that you seemed to want to ignore at least part of the distribution when YOU wrote: "Peak timing error is largely irrelevant in itself"!
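
To illustrate why the full distribution matters and the peak alone tells you very little, here's a small sketch comparing two jitter sequences with the SAME peak timing error but very different distributions. The 10 kHz / 44.1 kHz / 20 ns figures are hypothetical, chosen only to make the contrast visible:

```python
import math
import random

# Error power (dB relative to signal) from sampling a sine at jittered
# instants instead of the ideal ones.
def error_db(jitter_samples, f=10_000, fs=44_100):
    sig_pow = err_pow = 0.0
    for k, dt in enumerate(jitter_samples):
        t = k / fs
        ideal = math.sin(2 * math.pi * f * t)
        err_pow += (math.sin(2 * math.pi * f * (t + dt)) - ideal) ** 2
        sig_pow += ideal * ideal
    return 10 * math.log10(err_pow / sig_pow)

rng = random.Random(0)
peak = 20e-9   # identical 20 ns peak timing error in both cases
n = 100_000

# Case 1: every sample jittered, uniformly spread up to the peak.
uniform = [rng.uniform(-peak, peak) for _ in range(n)]
# Case 2: rare isolated spikes that just touch the same peak.
sparse = [peak if rng.random() < 0.001 else 0.0 for _ in range(n)]

print(error_db(uniform))  # far more error power...
print(error_db(sparse))   # ...than occasional spikes of the same peak size
```

Same peak, wildly different error energy - which is exactly why you can't summarise jitter with one number and have to look at the distribution (and, for audibility, its spectrum).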

