PeteH
Originally posted by wadia-miester
But so far in this thread the conclusion seems to be that read bit errors are almost improbable?
Certainly improbable, almost impossible


Originally posted by wadia-miester
But so far in this thread the conclusion seems to be that read bit errors are almost improbable?
Originally posted by PeteH
In the absolute worst-case example in the link you posted there are 30 uncorrectable errors, which AFAICT would correspond to something in the region of 600 microseconds of interpolation being necessary if the CD is to continue playing.
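PeteH's "600 microseconds" figure can be sanity-checked with back-of-envelope arithmetic, assuming each uncorrectable error forces interpolation over roughly one sample period at the Red Book rate of 44.1 kHz (the exact duration depends on how many samples each interpolation actually spans):

```python
# Back-of-envelope check of the interpolation figure quoted above,
# assuming one interpolated sample period per uncorrectable error.
SAMPLE_RATE_HZ = 44_100                  # Red Book audio sample rate
sample_period_us = 1e6 / SAMPLE_RATE_HZ  # ~22.7 microseconds per sample
uncorrectable_errors = 30
interpolated_us = uncorrectable_errors * sample_period_us
print(f"~{interpolated_us:.0f} microseconds of interpolated audio")
```

That comes out around 680 microseconds, i.e. the same order as the figure in the quote.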
The whole reason for this disc read error discussion is that it was raised by Julian - with similar additional input from Lowrider - and it's taken 3 pages or so to get that point through to him.
Originally posted by dat19
I think everyone agrees that disc read errors are not a significant problem in the sense that the data is correctly recovered (after error correction etc) from the disc.
I thoroughly agree that we need to stop talking about bit errors - but while people keep on making false assertions they need correcting.
Originally posted by dat19
you need to stop thrashing on the "non-existent" disc read problem, and look elsewhere for how the differences might arise...
Originally posted by dat19
PeteH and GrahamN,
My comment was in response to this (incorrect) piece of analysis:
Originally posted by dat19
HOWEVER, if people do want to go back and forth on the problems caused by read errors from the disc, and whether these can bubble up to the interpolation layer, then discussing average error rates isn't going to get you very far, especially when for example surface scratches are the source of correlated errors, which you are currently treating as "random" processes.
Originally posted by dat19
the "non-existent" disc read problem
From the information linked here so far:
Originally posted by wadia-miester
can we just clarify this please.
Originally posted by wadia-miester
so if we take ANY 25 transport mechanisms, and give them the same feed power etc, and the same output stages, DACs etc, by YOUR reckoning they'll all be identical then in sound and measurement, or am I just playing at being thick for the benefit of the tape, m'lud?
Originally posted by wadia-miester
But the answer is not what was asked!
Originally posted by GrahamN
The whole reason for this disc read error discussion is that it was raised by Julian - with similar additional input from Lowrider - and it's taken 3 pages or so to get that point through to him.
And the whole point of the interleaving in the CIRC is to disperse the correlated burst errors generated by a scratch (in that paper I linked, up to 2.4mm) so that enough local information survives for accurate correction.
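The dispersal effect is easy to demonstrate. The real CIRC uses cross-interleaved Reed-Solomon codes with delay lines, but a toy block interleaver (depth and frame length here are illustrative, not the real CIRC parameters) already shows how a contiguous burst on the disc turns into isolated, correctable errors per frame:

```python
# Toy illustration of interleaving: a contiguous burst of corrupted
# symbols on the "disc" de-interleaves into at most one error per frame,
# which a short Reed-Solomon code could then correct.
DEPTH, FRAME = 8, 16   # interleave depth and frame length (illustrative)

def interleave(frames):
    # Write frames row by row, read the "disc" stream column by column.
    return [frames[r][c] for c in range(FRAME) for r in range(DEPTH)]

def deinterleave(stream):
    frames = [[None] * FRAME for _ in range(DEPTH)]
    i = 0
    for c in range(FRAME):
        for r in range(DEPTH):
            frames[r][c] = stream[i]
            i += 1
    return frames

frames = [[(r, c) for c in range(FRAME)] for r in range(DEPTH)]
disc = interleave(frames)
for i in range(40, 46):            # a 6-symbol burst error on the disc
    disc[i] = "X"
recovered = deinterleave(disc)
per_frame = [row.count("X") for row in recovered]
print(per_frame)                   # the burst is spread across 6 frames
```

Any burst no longer than the interleave depth lands at most once in each frame, which is exactly why correlated scratch errors can still be corrected from local redundancy.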
As I'm equally happy for any falsehoods I perpetrate to be corrected - here's an opportunity to have a go. As I've said many times, the only differences can lie in the propagation of timing errors TO THE DAC CLOCK. In essence, word timing errors cause convolution of the desired spectrum with a distortion transfer function, which is essentially the spectrum of the jitter (and Bessel functions get involved somewhere).
Peak timing error is largely irrelevant in itself - the important things are the peaks in the jitter spectrum.
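The "convolution with the jitter spectrum" point can be shown numerically: sampling a pure tone at instants perturbed by sinusoidal jitter produces sidebands at the signal frequency plus and minus the jitter frequency. A sketch (all frequencies and the 2 ns jitter amplitude are illustrative):

```python
import numpy as np

# Sampling a 10 kHz tone with 1 kHz sinusoidal timing jitter puts
# sidebands at 9 kHz and 11 kHz: the tone's spectrum is effectively
# convolved with the jitter spectrum, as described above.
fs, n = 44_100, 1 << 14
f_sig, f_jit = 10_000.0, 1_000.0
jitter_amp = 2e-9                        # 2 ns peak timing error
t_ideal = np.arange(n) / fs
t_actual = t_ideal + jitter_amp * np.sin(2 * np.pi * f_jit * t_ideal)
x = np.sin(2 * np.pi * f_sig * t_actual)
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (f_sig - f_jit, f_sig, f_sig + f_jit):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:7.0f} Hz: {20 * np.log10(spec[k] / spec.max()):6.1f} dB")
```

For small jitter the sideband level is roughly 20·log10(π·f_sig·jitter_amp), here on the order of -85 dB relative to the tone: tiny peak timing error, but discrete spectral components rather than a benign noise floor.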
Agreed CDR and CD are different, but the same result (although with little detail on how extensive the test was) was found for CD by Altman. BTW: The main point of the link I posted was to explain CIRC in readily understandable terms (because the concepts are clearly not understood here). I guess there is a high probability it is an undergraduate exercise, as it's from a US military site.
Originally posted by dat19
Well, up to this point very little evidence of low error rates has been properly presented... but again no insight is provided into the number of errors that happen during real playback.
Absolutely no disagreement here whatsoever.
Originally posted by dat19
The first step here is to characterize the jitter and then you need to show that jitter shows up in the audio output of the DAC and that it isn't perceptually masked by the signal (ie, that it can be heard.)
Groan...
Originally posted by dat19
The distribution of the timing error and the time scale of the error is very significant, and impacts different DAC architectures in very different ways.
As Pete said...but IF the data stream is presented to the DAC with the same timing and same interfering noise, then I see no way it can sound different. That is a huge IF though, and I have no evidence of the side-effects of different mechanics on the output devices. There may be many effects that do get through to the output too, but have no effect on the sound.
Originally posted by wadia-miester
Pete, the question wasn't transports (in the whole part), I mean the mechs and data transfer before it gets to the messy bit, as if the answer is no, then this opens up a whole new Pandora's box, oh the joys of theories
Originally posted by GrahamN
As Pete said...but IF the data stream is presented to the DAC with the same timing and same interfering noise, then I see no way it can sound different. That is a huge IF though, and I have no evidence of the side-effects of different mechanics on the output devices. There may be many effects that do get through to the output too, but have no effect on the sound.
(And now the discussion has moved on to the more interesting aspects of characterising the timing errors... don't bother coming back with simplistic statements of peak jitter values - I'm sure they make sense in marketing literature, but I doubt they're of tremendous use elsewhere)
Well you shouldn't have been surprised, as it's nothing I haven't said many times before.
Originally posted by wadia-miester
Graham, as expected on point one, thanks
Well, sticking with the digital->DAC bit for the moment (which is what this thread is about after all)...that's certainly one approach. Seems much better to me to make the DAC clock insensitive to any of the upstream interference. Obvious approaches are either:
Originally posted by wadia-miester
Simple enough theory really: totally isolate the incoming data stream from ALL FORMS of interference, until it reaches the output phonos on the cdp/dac, then just carry that on down the rest of the chain until it comes out the cones/panels or horns
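One common way of making the DAC clock insensitive to upstream interference is asynchronous reclocking: queue incoming samples in a FIFO and clock them out with a local fixed-frequency oscillator, so output timing depends only on the local clock. A toy simulation of the idea (all numbers illustrative, not any particular product's design, and the buffer is assumed deep enough never to underrun):

```python
import random
from collections import deque

# Toy sketch of FIFO reclocking: samples arrive with jittery spacing but
# are read out of the buffer at fixed local-clock intervals, so output
# timing depends only on the local oscillator, not on upstream jitter.
random.seed(0)
period = 1.0                                   # nominal sample period
arrivals, t = [], 0.0
for _ in range(1000):
    t += period + random.uniform(-0.3, 0.3)    # jittery link timing
    arrivals.append(t)

fifo = deque(arrivals)
outputs, t_out = [], arrivals[0] + 10 * period # start after buffer fills
while fifo:
    fifo.popleft()
    outputs.append(t_out)
    t_out += period                            # fixed local clock

in_spread = max(b - a for a, b in zip(arrivals, arrivals[1:])) - \
            min(b - a for a, b in zip(arrivals, arrivals[1:]))
out_spread = max(b - a for a, b in zip(outputs, outputs[1:])) - \
             min(b - a for a, b in zip(outputs, outputs[1:]))
print(f"input interval spread:  {in_spread:.3f}")
print(f"output interval spread: {out_spread:.3f}")
```

In the real world the local clock must still track the source's average rate (or the buffer eventually over/underruns), which is where the engineering difficulty lies; this sketch shows only the isolation principle.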
If you can do that, I'll buy into the company!
Nothing I have any experience of...so that's for others to say. Maybe it's in those papers dat19 referred to...so it would be nice to see what they say. (There's also some simplified stuff...not sure how helpful...in that Altman link, although I've not read it all.)
Originally posted by wadia-miester
Seriously for a moment now, where do you feel the main contamination paths for the data stream lie?
Originally posted by dat19
Also, the "paper" you provided (which looks like a link to an undergraduate assignment) discusses the robustness of the code, but again provides no insight into the number of errors that happen during real playback.
Originally posted by dat19
The distribution of the timing error and the time scale of the error is very significant
Don't confuse CD-ROM with CD-R. An Audio CD written on CD-R media is still an Audio CD, not a CD-ROM. So it will still only contain the same amount of redundancy as a manufactured CD.
Originally posted by GrahamN
Pete, don't forget that CD-ROMs have a second level of error correction above the C1/C2 parts of the basic CIRC
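That second layer is easy to quantify. Per the ECMA-130 / Yellow Book sector layout, a CD-ROM Mode 1 sector carries 2048 user bytes inside a 2352-byte sector, with the remainder spent on sync, header, EDC and a second ECC layer, all sitting on top of the CIRC that both formats share; an Audio CD spends all 2352 bytes of the sector on samples:

```python
# CD-ROM Mode 1 sector layout (ECMA-130), all on top of the shared CIRC:
# 12 sync + 4 header + 2048 user data + 4 EDC + 8 reserved + 276 ECC bytes.
sector = 12 + 4 + 2048 + 4 + 8 + 276
print(sector)                              # 2352
overhead = 1 - 2048 / 2352
print(f"extra redundancy overhead: {overhead:.1%}")
```

So a CD-ROM gives up roughly 13% of its raw capacity for the extra correction layer that audio discs simply don't have.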
Originally posted by GrahamN
Agreed CDR and CD are different, but the same result
Since you clearly know more than you're posting...then POST IT! Also, simply quoting references to papers that are not generally available to people here is not tremendously helpful - any links or summaries would be most welcome.
Groan...
"distribution of the timing error and the time scale" = "jitter spectrum".
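To make that equivalence concrete: once you have a sequence of per-sample timing errors, its spectrum IS the jitter spectrum, and a deterministic component (the kind that produces audible sidebands) stands out from a random floor immediately. A sketch with illustrative numbers (5 ns sinusoidal jitter at 1 kHz over 0.5 ns of random jitter):

```python
import numpy as np

# The "distribution and time scale" of timing errors is captured by the
# spectrum of the timing-error sequence itself: a periodic component
# shows up as a discrete peak above the random-jitter floor.
rng = np.random.default_rng(0)
fs, n = 44_100, 1 << 14
t = np.arange(n) / fs
timing_error = (5e-9 * np.sin(2 * np.pi * 1_000 * t)
                + rng.normal(0, 0.5e-9, n))
spec = np.abs(np.fft.rfft(timing_error * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(spec[1:]) + 1]      # skip the DC bin
print(f"dominant jitter component near {peak:.0f} Hz")
```

A single "peak jitter" number would describe both components with one figure and tell you nothing about which of them matters.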