Lies, Damn Lies and Statistics, or DBTs
Folks,
Has it occurred to you they might be imagining it? Millions of people claim homeopathy and reading tea leaves work. DBTs work in every other scientific field - it's only in the mumbo-jumbo areas that the scientific method is in contention - for obvious reasons.
DBTs work in audio too. Not wanting to rehash the previous discussions, but most supposed DBTs in audio are neither sufficient in sample size nor rigorous enough in implementation (including guarding against personal bias) to offer any statistical power or relevance.
Further, given that they are generally produced with great theatrics by self-proclaimed "debunkers" who, even after repeated, rigorous and purely scientific criticism of their methods (including published papers in peer-reviewed publications), do not change their methodology, their general applicability and indeed their accuracy must at the very least be considered suspect.
I remember one DB test of speaker cables published in JAES in which the authors used a fairly small sample size, set an appropriate level of significance (0.2 in their case) and came to the conclusion that, with 80% certainty, their test subjects could distinguish between the cables.
The naysayers criticised their choice of significance level and instead insisted on applying a wholly inappropriate (in view of the sample size) significance level of first 0.1 (at which the rerun test still produced a positive result) and finally 0.05, at which the test, with a similarly small sample size, failed to show a difference.
To put these tests into plainer language: three tests were run. Of these, two showed that with 80% and 90% certainty respectively the "cables sound different" conclusion was not due to randomness (I'd call that bloody good odds), while the third showed that the same conclusion could not be supported at 95% certainty.
HOWEVER, calling in from left field, all the tests combined together would actually give 95% certainty for a "cables sound different" conclusion, i.e. support it at a 0.05 level of significance.
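To put rough numbers on that (made-up figures, NOT the actual JAES data, purely to show the arithmetic): in a forced-choice test pure guessing gives 50% correct, and the exact one-sided binomial tail tells you how likely a given score is by chance alone. In the sketch below each small run misses the 0.05 cut-off on its own, yet the pooled trials clear it easily - which is exactly the trap of demanding 0.05 significance from tiny samples.
```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Exact one-sided binomial tail: P(X >= k correct) in n forced-choice trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical runs: (correct answers, trials) - NOT the published JAES figures.
runs = [(11, 16), (11, 16), (11, 16)]

for i, (k, n) in enumerate(runs, 1):
    print(f"run {i}: {k}/{n} correct, p = {p_at_least(k, n):.3f}")   # ~0.105 each

k_tot = sum(k for k, _ in runs)
n_tot = sum(n for _, n in runs)
print(f"pooled: {k_tot}/{n_tot} correct, p = {p_at_least(k_tot, n_tot):.4f}")  # well under 0.05
```
Whether pooling runs like that is legitimate depends on the runs being comparable, of course, but it shows how much the chosen significance level and the sample size drive the verdict.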
The people who had been conducting the tests (I may be off base, but I seem to remember, possibly wrongly, Kinoshita San's involvement) simply gave up shouting in a desert where no one wants to hear, writing it off as a bad job, and the naysayers declared victory.
Meanwhile, I recently had the job of comparing two capacitors from different manufacturers, of practically identical construction (same materials, including film thickness) and very nearly identical values. Surprise, surprise: despite being identical by all measurements, the two turned out to show, under blind conditions (though not a stringent DBT), a small but clearly identifiable difference, WITH SOME recordings.
The recordings that showed the difference most were completely minimal "direct" recordings, done with extreme degrees of purity and attention to detail, while the recordings that showed no discernible difference under blind conditions were modern-style, processed and produced recordings, including, surprisingly, some "audiophile" ones.
This also produces an interesting corollary to the "supposedly identical and interchangeable capacitors sound different" conclusion, namely that many modern recording practices and much modern recording gear introduce a much greater level of whatever distortion or alteration of the signal the capacitors cause, so that the capacitor differences are obscured.
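A crude back-of-the-envelope illustrates the point (invented figures, and the simplifying assumption that the chain's artefacts and the capacitor's artefacts are uncorrelated and add in power - a toy model, not a claim about the actual mechanism):
```python
from math import sqrt

# Toy model, invented numbers: artefact levels in % that combine as an RMS sum.
cap_a, cap_b = 0.05, 0.02        # the two "identical" capacitors
chains = {"minimal direct recording": 0.005, "processed modern recording": 0.5}

for name, chain in chains.items():
    tot_a = sqrt(chain**2 + cap_a**2)
    tot_b = sqrt(chain**2 + cap_b**2)
    gap = 100 * (tot_a - tot_b) / tot_b
    print(f"{name}: {tot_a:.3f}% vs {tot_b:.3f}% (relative gap {gap:.1f}%)")
```
In this toy model the two capacitors differ by well over a factor of two in total artefact on the minimal recording, while on the processed one the gap shrinks to a fraction of a percent.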
Woe to any DBT using such recordings as source....
Ciao T
PS: controversial results from DBTs are not unique to audio; the same happens in other fields, except that few audiophiles are exposed to the controversy. DBTs are neither good nor bad, useful nor useless in themselves; they are a tool. The results depend on the use of the tool by the experimenter, the experimenter's intentions, experience and degree of care in use, and the tests have inherent and implicit limitations that are rarely mentioned, as they are taken as "read" by those "skilled in the art".