The whole reason for this disc read error discussion is that it was raised by Julian - with similar additional input from Lowrider - and it's taken 3 pages or so to get that point through to him.
And the whole point of the interleaving in CIRC is to disperse the correlated burst errors generated by a scratch (up to 2.4mm in that paper I linked) so that enough uncorrupted local information remains for accurate correction - a toy sketch of the idea follows.
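To make the dispersal principle concrete, here's a toy block interleaver in Python. It is not the actual CIRC (which cross-interleaves two Reed-Solomon codes through delay lines of varying length); it only shows the underlying idea that a contiguous burst on the disc comes out scattered after de-interleaving, one erasure per codeword, which is what the error corrector can actually handle:

```python
# Toy block interleaver (depth 4) illustrating burst dispersal.
# The real CIRC uses delay-line cross-interleaving; this is a
# simplified sketch of the same principle.
DEPTH = 4

def interleave(symbols, depth=DEPTH):
    # Write row-by-row into a depth x width matrix, read column-by-column.
    width = len(symbols) // depth
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth=DEPTH):
    # Exact inverse: read the columns back out row-by-row.
    width = len(symbols) // depth
    cols = [symbols[i * depth:(i + 1) * depth] for i in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

data = list(range(16))          # four 4-symbol "codewords"
tx = interleave(data)
for i in range(4, 8):           # a scratch wipes out 4 consecutive symbols
    tx[i] = None
rx = deinterleave(tx)
print(rx)
# [0, None, 2, 3, 4, None, 6, 7, 8, None, 10, 11, 12, None, 14, 15]
# The burst has been spread so each codeword loses only one symbol.
```

A single-error-correcting code over each 4-symbol codeword would recover everything here, whereas without interleaving the burst would have destroyed one codeword beyond repair.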
I thoroughly agree that we need to stop talking about bit errors - but while people keep making false assertions, those assertions need correcting.
As I'm equally happy for any falsehoods I perpetrate to be corrected - here's an opportunity to have a go. As I've said many times, the only differences can lie in the propagation of timing errors TO THE DAC CLOCK. In essence, word timing errors convolve the desired spectrum with a distortion transfer function, which is essentially the spectrum of the jitter (Bessel functions get involved, as in any phase modulation). Peak timing error is largely irrelevant in itself - what matters is the peaks in the jitter spectrum.

There are differences in how implementations extract the clock for the DAC from the input data stream: some use all the bit transitions; better ones use just the frame sync bits; even better ones slave the transport output clock to an independent clock supplied close to the DAC, and so don't use any clock extracted from the data stream at all. I would love more information on the degree and nature of their effects in practice.
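For anyone who wants to see the "peaks in the jitter spectrum" point numerically, here's a small Python/numpy sketch. All the figures in it (48kHz rate, 1kHz tone, 200Hz jitter tone, 10ns peak timing error) are illustrative numbers I've picked, not measurements of any real transport or DAC. Sinusoidal jitter on the sample clock phase-modulates the output tone, producing discrete sidebands at f0 +/- fj whose level follows the usual Bessel-function PM sideband amplitudes:

```python
import numpy as np

fs = 48000.0   # nominal sample rate, Hz (assumed)
f0 = 1000.0    # test tone, Hz
fj = 200.0     # jitter modulation frequency, Hz
tj = 10e-9     # peak timing error, seconds (10 ns, an assumed figure)

n = np.arange(2**16)
t_ideal = n / fs
# Each conversion instant is displaced sinusoidally by up to tj seconds.
t_actual = t_ideal + tj * np.sin(2 * np.pi * fj * t_ideal)

x = np.sin(2 * np.pi * f0 * t_actual)   # what the jittered DAC outputs

spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
spec_db = 20 * np.log10(spec / spec.max())
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Sidebands appear at f0 +/- fj. For small modulation index
# beta = 2*pi*f0*tj, the first-order sideband is ~ 20*log10(beta/2),
# roughly -90 dBc for these numbers.
for f in (f0 - fj, f0, f0 + fj):
    i = np.argmin(np.abs(freqs - f))
    print(f"{freqs[i]:7.1f} Hz : {spec_db[i]:6.1f} dBc")
```

Note what this shows: the same 10ns peak error concentrated at one modulation frequency produces clean, audible-in-principle sidebands, whereas the same energy spread as broadband random jitter would just raise the noise floor slightly - which is why the shape of the jitter spectrum matters far more than the peak number.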