how can digital be different

Originally posted by julian2002
Uncle ants,
simple answer is cost. also the 'tweakier' the 'solution' the more of us gullible fools will be attracted to it. i believe meridian does use a buffer system similar to this and of course there is the chord dac 64 which also uses a buffer memory (although not for this purpose) so there are some manufacturers out there doing this.

Hmm, as Mo says, wouldn't have thought it'd be pricey. After all they build them into 50 quid portables. Just strikes me that if this is really an issue then spending a few quid per unit on such a system might be well worth it.
 
Originally posted by penance
I imagine the cost would be that of adding a buffer chip; in the world of making money that would be considered too much.

Some manufacturers spend loads just making their kit look as beefy as possible. Would have thought this would be a drop in the ocean in comparison (and who knows, might even be more effective) - that is assuming it's really an issue of course :)
 
there may be mechanical considerations too. if you are spinning the disc faster and possibly intermittently the lifespan of the drive would be shorter, not good in a multi '000 pound cd player. you've also got noise considerations - cd-roms aren't the most silent transports ever are they. also the addition of buffer memory may have adverse effects on jitter performance. not an issue on a 50 quid portable but on a costly player if it's got high jitter it MUST be crap (if there are any americans reading that was sarcasm btw ;) ).
other than that i couldn't guess.
cheers


julian
 
Originally posted by julian2002
if an error occurs then the error correction kicks in and the chips make a 'best guess' as to what the corrupt data should be. this guess however is not the original data and so distortion occurs.

Not true AFAIK - the error correction is sufficiently robust that the data is always recovered bit-perfect. The question of error correction is largely irrelevant to the transport argument, which concerns jitter, i.e. timing errors. If there is an error in reading the data, there's a loud click or pop of some sort or the CD skips or repeats, but this actually constitutes an improvement in the sound quality as it makes it more vinyl-like :D
 
Originally posted by wadia-miester
this is going to rumble on :D me, I prefer a Sanyo CD drive & Russell Hobbs leads, trashes my Wadias big style :(

Or a DVD player, the image will distract you from all those problems... :guiness:
 
If transports are so perfect how come things like the green pen make such a difference?
If the green pen makes such a difference, how come you cannot prove it?

Seriously, we can repeatedly rip a CD on a variety of CD-ROM drives and get the same results each time. And this can happen at greater than real-time speeds. So there's no intrinsic problem feeding a DAC with accurate data. 'Jitter' then becomes totally the responsibility of the DAC.
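You can check this yourself: rip the same track twice and compare hashes of the two files (rip1.wav and rip2.wav below are just hypothetical names for the two rips).

```python
import hashlib

def file_hash(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# rip1.wav / rip2.wav are hypothetical names for two separate rips of the same track.
print(file_hash("rip1.wav") == file_hash("rip2.wav"))  # True if the rips are bit-identical
```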

If transports sound different I think you have to question the engineering ability of the transport and DAC makers. And why people are joining together components in a way that makes trouble for themselves and increases expense...

Paul
 
.............which is why I now have a one box player.

Funnily enough, the green pen works with that too. Does that mean my one box player is crap too?
 
urghhhhhhh pplllleeeeeeeeeaaaaaaaaasssssseeeeeee not another green pen debate

:banghead:


I'd even rather talk about Mana :(
 
peteh,
if what you say were true there would be no need for drives to re-read data which has been incorrectly read. i can assure you that most 'drives', be they cd-rom or hard disk, do indeed have to re-read incorrectly read data at some point or another.
i'm not sure of the exact specifications of the rs coder used for cd-a, but as a system rs is better at correcting 'burst' errors (errors which occur one after the other) than randomly distributed errors. obviously this will deal with physical defects on the disc better than random electrical-noise-induced data errors.
an example: if you were using an rs code with 8-bit words, where each 255-word block includes 32 parity words (so 223 words of real data), you'd be able to correct up to 16 corrupted words per block. one 50-bit 'burst' of errors only touches 7 or 8 words, so that would be fine, but if those 50 bad bits were randomly distributed across the 223 words of real data they could corrupt far more than 16 words and there would likely need to be a re-read. errors that land in the parity words eat into the same correction budget too.
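here's that arithmetic sketched in python (the rs(255,223) figures are illustrative - i'm not claiming they're the exact red book parameters):

```python
# Back-of-envelope check of the RS(255,223) example above (illustrative figures,
# not the exact parameters a real CD player's error correction uses).
import math

SYMBOL_BITS = 8          # 8-bit words
CODEWORD_SYMBOLS = 255   # total words per block
PARITY_SYMBOLS = 32      # parity words per block
DATA_SYMBOLS = CODEWORD_SYMBOLS - PARITY_SYMBOLS  # 223 words of real data
MAX_CORRECTABLE = PARITY_SYMBOLS // 2              # RS corrects up to 16 bad words

print(f"{DATA_SYMBOLS} data words + {PARITY_SYMBOLS} parity words per block, "
      f"up to {MAX_CORRECTABLE} corrupted words correctable")

def words_hit_by_burst(burst_bits):
    """Worst-case number of words a contiguous burst of bad bits can touch."""
    return math.ceil(burst_bits / SYMBOL_BITS) + 1  # +1: the burst may straddle a word boundary

def words_hit_by_random(error_bits):
    """Worst case for scattered errors: every bad bit lands in a different word."""
    return min(error_bits, CODEWORD_SYMBOLS)

for label, hit in [("50-bit burst", words_hit_by_burst(50)),
                   ("50 scattered bit errors", words_hit_by_random(50))]:
    verdict = "correctable" if hit <= MAX_CORRECTABLE else "needs a re-read"
    print(f"{label}: up to {hit} corrupted words -> {verdict}")
```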
another thing is that you've got to remember that cds are 20+ years old. the amount of processor power necessary to implement rs decoding is not trivial so red book probably uses a low number of parity bits and so won't be able to rescue many 'words'.... i'm winging it now so i'll shut up.
cheers


julian
 
even linear tape drives re-read. Imagine how much more work is involved in that. In fact they can do a fair number of re-reads before a fail condition. Even though I don't work on optical or cylinder drives, I imagine, just from the mechanics of them, it is a hell of a lot easier to re-read on that media.
 
Originally posted by julian2002
.... i'm winging it now so i'll shut up.

Me too :D I don't believe it to be the case though that the error correction starts making approximations to the original data if it can't quite retrieve everything - you can't make approximations in the digital domain. Either the data is read bit-perfect and your CD plays, or it's not and you typically get loud pops or cracks - the whole point of digital storage is that there's nothing in between. As I say though, I'm no expert so I'm quite prepared to be proven wrong.
 
Julian, you are right in part... but wrong in as much. If the bursts you talk about are short enough then you are right about being able to correct more bits that way, but you are more likely to be able to correct a given (low) error incidence if it's spread randomly across multiple frames than if it's all localised within a single frame - hence the levels of interleaving put into the system (i.e. deliberately scattering burst errors across many different data words). If anyone wants to find out how the error correction is done, they should read something like this well-written article. And as adb has said repeatedly in the past, the error correction doesn't "kick in" and "work harder", resulting in some analogue-y sag or distortion - it works at the same rate continuously while the player is playing.
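A toy sketch of why the interleaving helps (made-up sizes, nothing like the real CIRC layout, just the principle):

```python
# Toy illustration of interleaving: four 8-symbol codewords are written to the
# 'disc' column by column, so a contiguous burst of damage on the disc comes
# back as isolated errors spread across separate codewords.
CODEWORDS = 4
LENGTH = 8

codewords = [[f"w{w}s{s}" for s in range(LENGTH)] for w in range(CODEWORDS)]

# Interleave: symbol 0 of every codeword, then symbol 1 of every codeword, ...
stream = [codewords[w][s] for s in range(LENGTH) for w in range(CODEWORDS)]

# A 4-symbol burst of damage somewhere in the interleaved stream...
damaged = set(stream[10:14])

# ...de-interleaves to at most one bad symbol per codeword, which the
# per-codeword error correction handles easily.
for w in range(CODEWORDS):
    bad = [sym for sym in codewords[w] if sym in damaged]
    print(f"codeword {w}: {len(bad)} damaged symbol(s) {bad}")
```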

I would be very surprised if audio players do re-read data - misreads IME just result in a click. Re-reading would also not help with damaged media, only external influences (as in e.g. anti-jog)


Originally posted by Lowrider
For instance...:confused:
OK... just a couple of examples:
For one of these two waveform dimensions, the vertical amplitude axis, the CD contains some information, but the data coming from the CD are emphatically not fully or adequately descriptive of the music waveform's ever changing amplitude, especially for musical frequencies above 2 kHz or so. Instead, these data coming from the CD are only sketchy clues about the music waveform's ever changing values along the vertical amplitude axis.
1) These "sketchy clues" are the data bits being read off the disc, which are read off essentially perfectly in all but the worst cases. While CD-ROMs in your PC have a little more circuitry to do an even better job, think what would happen to your PC if it couldn't. The whole point of digital transmission is that the exact value of the signal amplitude has NO SIGNIFICANCE. The only important thing is whether it is greater or lower than a certain value - if it exceeds it, then it is IRRELEVANT by how much. 2) That figure of 2kHz is complete crap - maybe they meant 20kHz. 44.1kHz sampling can distinguish fully (to a resolution dependent on the bit depth of the digitisation) any frequency below 22.05kHz.
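A toy sketch of that threshold decision (made-up voltage levels, purely illustrative):

```python
import random

# Toy model of a digital receiver: the transmitted bit is recovered by comparing
# the received voltage against a threshold, so the exact amplitude is irrelevant
# as long as it stays on the right side of it. (Made-up levels, purely illustrative.)
HIGH, LOW, THRESHOLD = 1.0, 0.0, 0.5

def receive(bit, noise):
    """Add amplitude noise to the ideal level, then slice against the threshold."""
    voltage = (HIGH if bit else LOW) + noise
    return 1 if voltage > THRESHOLD else 0

random.seed(0)
bits = [random.randint(0, 1) for _ in range(10_000)]
# Amplitude error of up to +/-0.4 distorts every sample but flips no bits:
recovered = [receive(b, random.uniform(-0.4, 0.4)) for b in bits]
print(recovered == bits)  # True: the 'waveform' was distorted yet the data is intact
```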
But the other half of that same music waveform, its horizontal time axis, is not encoded digitally onto the recording, neither in full nor as sketchy clues. And it is not even encoded onto the recording in analog. Actually, it is not encoded onto the recording at all! Instead, it is merely assumed.
This assumption is what's called a standard! If the data is not stored at 44.1kHz then all bets are off. But this is not a difficult standard to meet - crystals are available for less than a couple of quid that provide standard frequencies to better than 1 part in 10^7. That's many orders of magnitude better than you'll get off the best TT. (And to accommodate drift or physical tolerances on the disc it appears there are 3 sync bytes in every 36 on the disc.)
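Rough arithmetic to put 1 part in 10^7 in context (the turntable figure below is a typical published wow & flutter spec, quoted for scale only):

```python
# Rough arithmetic comparing clock accuracy to turntable speed stability.
# The 0.05% wow & flutter figure is a typical published spec, used purely for
# scale, not a measurement of any particular deck.
SAMPLE_RATE = 44_100        # Hz, the red book standard
CRYSTAL_TOLERANCE = 1e-7    # 1 part in 10^7, a cheap crystal's accuracy
TT_WOW_FLUTTER = 0.0005     # 0.05%, a typical decent-turntable spec

crystal_error_hz = SAMPLE_RATE * CRYSTAL_TOLERANCE
print(f"Worst-case sample-rate error: {crystal_error_hz:.4f} Hz "
      f"({CRYSTAL_TOLERANCE * 1e6:.1f} ppm)")
print(f"Typical turntable speed error: {TT_WOW_FLUTTER * 1e6:.0f} ppm, "
      f"i.e. about {TT_WOW_FLUTTER / CRYSTAL_TOLERANCE:,.0f}x worse")
```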

With this amount of misinformation, distortion and obfuscation, how can we work out what is useful information (some of which may well be scattered amongst the garbage) and what is more crap?
 
I'm not getting involved in the transport debate...

But if you think it's all just a matter of 1s and 0s then I suggest you get your mate to look at exactly how the receiving circuitry in a DAC decides whether the signal coming from the cable is a 1 or a 0 (GrahamN has already given the answer)... then get him to research the effect of capacitance on a square-wave signal... and he should (if he has any capacity to reason) realise how the cable can corrupt the signal in such a way as to induce errors (both bit and timing) in the input circuitry of the DAC. Once these errors have occurred... they CAN NOT be rectified. The DAC has NO circuitry to enable it to do so... it also has NO IDEA of what the signal SHOULD look like.
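A crude sketch of that mechanism (the RC value and noise level are made up, just to show how a slowed edge plus amplitude noise becomes a timing error at the receiver's threshold):

```python
import math, random

# Crude model of cable capacitance acting on a digital edge: an RC low-pass slows
# the transition, so amplitude noise at the slicer threshold turns into timing
# error (jitter). The RC value and noise level are made up for illustration.
RC = 20e-9          # seconds, assumed cable/driver time constant
THRESHOLD = 0.5     # the receiver decides 1/0 at half amplitude
NOISE = 0.02        # assumed peak noise as a fraction of full swing

def crossing_time(noise_sample):
    """Time at which the RC-rounded rising edge, 1 - exp(-t/RC), crosses the
    threshold once noise has effectively shifted that threshold."""
    effective = THRESHOLD - noise_sample
    return -RC * math.log(1.0 - effective)

random.seed(1)
nominal = crossing_time(0.0)
offsets = [crossing_time(random.uniform(-NOISE, NOISE)) - nominal for _ in range(10_000)]
worst_ns = max(abs(o) for o in offsets) * 1e9
print(f"Nominal crossing {nominal * 1e9:.1f} ns after the edge starts; "
      f"noise shifts it by up to {worst_ns:.2f} ns (edge-timing jitter)")
```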

GTM
 
