Why isn't it done? It seems a simple enough exercise to build a profile of a driver's non-linear distortion and then apply pre-distortion to cancel it. Building DSP algorithms is beyond my ability, but I'm sure some of the big companies could do it.

As an example: if you play a 1 kHz test tone through a midrange driver and find that it produces harmonics at 2 kHz at -45 dB, 3 kHz at -49 dB, 4 kHz at -62 dB, etc., then you could simply add those same harmonics, out of phase, to the input signal. Run a full-spectrum sweep at multiple levels and you can build a profile of how the driver behaves. Even a naturally poor driver could be made to perform very well... couldn't it?
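Just to make the idea concrete, here's a rough sketch in Python/NumPy of the naive single-tone version of what I mean. The harmonic levels are the made-up numbers from my example above, and I'm assuming the driver's harmonics are exactly in phase with the fundamental (so a 180-degree flip cancels them), which in reality you'd have to measure too:

```python
import numpy as np

FS = 48_000        # sample rate in Hz
F0 = 1_000         # fundamental test tone in Hz
DURATION = 1.0     # seconds of signal to generate

# Harmonic levels measured relative to the fundamental (my example numbers):
# 2nd harmonic at -45 dB, 3rd at -49 dB, 4th at -62 dB.
# Assumed phase of each harmonic is 0 degrees relative to the fundamental;
# a real measurement would have to capture phase as well or the
# cancellation won't work.
harmonics_db = {2: -45.0, 3: -49.0, 4: -62.0}

t = np.arange(int(FS * DURATION)) / FS
signal = np.sin(2 * np.pi * F0 * t)

# Naive static pre-distortion: inject each measured harmonic flipped
# 180 degrees (i.e. subtracted) so it ideally cancels the harmonic the
# driver itself generates at that frequency.
predistorted = signal.copy()
for order, level_db in harmonics_db.items():
    amp = 10 ** (level_db / 20)  # convert dB below fundamental to linear amplitude
    predistorted -= amp * np.sin(2 * np.pi * order * F0 * t)
```

Of course this only corrects one tone at one drive level; the multi-level full-spectrum sweep would be about extending a table like `harmonics_db` into a complete profile of the driver, since the distortion products change with both frequency and level.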