Power Cable Test – Volunteers Needed

I don't think anyone is being critical - we are all impressed and grateful and just want to offer suggestions for making the tests more conclusive.
 
Statistically meaningless. Sadly that has to be the one and only conclusion. I'm delighted however that these people seemed to enjoy themselves blind testing mains cables. Good for them!
 
I really do think that tests on the effects of mains cables (and other cables) are worthwhile, and I regret that we clever clogs didn't engage brain on methodology before they did those tests.

Measurement: each judge to assess (and score) the 'difference' between two given sessions for each 'product pair' with respect to sound quality, as apparent or not apparent (1, 0), and, if apparent, to say whether they liked, disliked or were indifferent (1, -1, 0) to that difference.

Create a data record for each judge (j=1, J) and 'product pair' (k=1, K).

Need to derive a test statistic, or two, or three ...

We could use the A-A control differences as an estimate of error variance for the test statistic. Or would repeated measurement of the same 'product pair' be a better approach, relegating the A-A controls to just a measure of something else?
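A minimal sketch of what such a data record and a couple of first-pass statistics might look like, in Python (the `Record` structure and the sample scores here are invented for illustration and do not come from the actual test):

```python
from dataclasses import dataclass

@dataclass
class Record:
    judge: int       # j = 1..J
    pair: int        # k = 1..K ('product pair' compared in the session)
    apparent: int    # 1 if a difference was heard, 0 if not
    preference: int  # +1 liked, -1 disliked, 0 indifferent (only meaningful if apparent)

# Made-up records for J = 3 judges and K = 2 product pairs
records = [
    Record(1, 1, 1, +1), Record(1, 2, 0, 0),
    Record(2, 1, 1, -1), Record(2, 2, 1, 0),
    Record(3, 1, 0, 0),  Record(3, 2, 1, +1),
]

# Two obvious candidate statistics: how often a difference was reported
# at all, and the net preference among sessions where one was reported.
detection_rate = sum(r.apparent for r in records) / len(records)
net_preference = sum(r.preference for r in records if r.apparent)
print(detection_rate, net_preference)
```

The A-A control sessions would then supply the baseline: whatever `detection_rate` they produce is the false-alarm rate against which the real product pairs get compared.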
 
That is what I tried to suggest to Mosfet even before the test. However, he thought it would be too difficult to explain to listeners how to do this, though I never understood why that should be the case.

I think this would have placed too much on the testers, WG - if, for instance, they were asked to record in words what they were hearing. I was keen that the concentration should remain on the salient points of recording difference and improvement.

The best summation of possible conclusions I've seen (better than I could manage) so far is by 'Meninblack'. Posted here for those who haven't read.


1. The cables made essentially no difference.
Fits the facts. The only counter-evidence is Mr. C's assertion that differences were plain to him. Tony, were you sighted or unsighted of the cables? If unsighted, did you correctly identify the kettle lead every time?

2. The cables made a difference, but it was so slight that the resolving power of the experiment was inadequate.
Possible, but with three blind testers and only one variable each time this would imply that the differences were very slight indeed. Very, very slight in fact.

3. The results were somehow fiddled by the panel or the experimenter.
Paranoid nonsense!

4. The panel's hearing is inadequate. The differences are like night and day, but only to the platinum-eared elect.
What, we've randomly ended up with three keen audiophiles and cable believers who just happen to be deaf? Unlikely.

5. Some other unidentified factor prevented the differences from being heard.
Always a good get-out! Anyway, Tony heard them. Why not the other three?

http://www.hifiwigwam.com/forum1/1104-2.html
 
3DSonics wrote:
Thank you for your smugness. [edited by Dev] :mad:

First, may I ask if any controls were done to validate the test setup and participating listeners against "known audible" stimuli?
No. The testers were not required to pass golden ears tests parts one thru seven nor record their responses to the tritone paradox. It was assumed they were of normal and representative hearing ability.

The responsibility “to validate the test-setup”, if by this you mean to competently set up the test system, was with WM, the host.

As no effort was made to illustrate what could actually be identified reliably under the test conditions.
Please read the method at least. Efforts were made within the limitations of what could be achieved.

Past that, once we have evened out the chances of type I and type II statistical errors, we find that we have insufficient data to draw any conclusion with any reasonable certainty.
And the “reasonable certainty” of sticking felt dots around the room to influence the room acoustic is again?


'I deliberately did not record who made which observations because I was anxious that this should be a test of cables, not of people'. Shame, as this reduces the power of the test, and so the number of judges (n) would likely have to be larger than 3.
I can supply this information to you if you wish ditton. It would be necessary to maintain anonymity of who thought what specifically (this is possible) because this was an assurance given.
 
Maybe I'm just misreading the comments...

But as someone else said... statistically too small a sample, and in that context the results are not meaningless, just too small to raise themselves above the 'random noise'.

I like Pareto analysis for off-the-wall subjects. But it's been 10 years and a bad road accident since I used that methodology for any practical purpose. I'll have to have a think as to its effectiveness... that is, unless anyone can help?
 
zanash said:
but I still think there is insufficient data in the test.
There isn't. Statistical analysis, whichever methodology you choose, is all about showing that the results obtained were very unlikely (usually less than a 5% chance is the chosen breakpoint) to have occurred by pure chance alone.

In this case there is nowhere near enough data to be able to show that. If instead of making a conscious choice in each test the listeners had just tossed a coin to decide their answer, it's quite likely that similar results would have been obtained. On that basis the results are meaningless.
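The coin-toss point can be made concrete with an exact binomial calculation (a sketch only; the trial counts below are illustrative, not taken from the actual test):

```python
from math import comb

def p_at_least(k, n):
    """Chance of scoring k or more correct out of n trials by coin-tossing alone."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# With only five trials, even a perfect 5/5 run only just clears the usual
# 5% threshold, and a single miss leaves chance firmly in play.
print(p_at_least(5, 5))   # 0.03125
print(p_at_least(4, 5))   # 0.1875
```

This is why so few trials cannot support a firm conclusion either way: with a handful of sessions, anything short of perfection is statistically indistinguishable from guessing.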

mosfet said:
it was assumed they were of normal and representative hearing ability.
If you had shown that the listeners were able to distinguish a "known audible" difference without fail, you would not only have validated the listeners' hearing ability but also the ability of the system (including the room etc.) to resolve such small (but audible) differences. Since you think it's OK to assume that was the case, it's quite reasonable for anyone to assume it wasn't. The more assumptions you can eliminate in any experiment, the more indisputable the results.

Michael.
 
Well, it seems to me that the evidence from this particular test showed that there are no differences between stock and aftermarket power cables.

7 "that's better" scores for the custom cables
6 "that's better" scores for the kettle lead.

This is as close to 50% as you'll get. Now, if everyone listening believed that power cables make a difference, they are likely to "hear" a difference and thus score something. I am aware of the complications of any form of testing.
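As a rough check of just how consistent that 7-6 split is with pure chance, an exact two-sided binomial test can be sketched (the 13-vote total comes from the scores above; the helper function is my own, not part of the original test):

```python
from math import comb

def two_sided_p(k, n):
    """Two-sided exact binomial p-value against pure chance (p = 0.5)."""
    mean = n / 2
    # Sum the probabilities of every outcome at least as far from n/2 as k is.
    return sum(comb(n, i) for i in range(n + 1)
               if abs(i - mean) >= abs(k - mean)) / 2 ** n

# 7 "that's better" votes for the custom cables out of 13 votes in total
print(two_sided_p(7, 13))
```

The result is a p-value of 1.0: a 7-6 split is the single most chance-like outcome possible with 13 votes, which is exactly the "as close to 50% as you'll get" point.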

Until someone (and this includes ANYONE!) can blind listening test a mains cable and get it correct over 70% of the time, I will not be buying any more.

From this I still believe power cables make no difference. Oh and I own a few too.

I did this test with the green-edged CDs: I got two of the same CD single and tried it time after time (blind) and was consistently unable to guess the difference (although I also swore I heard one).

Thanks for doing the tests though! Very useful.

Later, Tim
 
First, I am not bothered about anonymity. Publish if you want to Paul!

(But then I was the one who spotted the 'control' test :D ).

Nice to know I'm not hearing things.

I didn't always like what the aftermarket cable did and in most cases preferred the stock cable. What this says to me is that different cables have different sonic characteristics and that if you want to use one to improve your system, you should demo several, preferably at home, with a wide selection of musical styles.

I think it would be interesting to analyse the musical preferences of the judges, especially whether they err towards flat-earth or round-earth presentation. Some of the cables were distinctly flat-earth whilst at least one of them was round-earth in spades.

I think it would have been useful to repeat the 5 tests with the cables reversed (so A-B becomes B-A). The music for each pair would have to be different and the judges would have to be unaware that this was being done. It would show whether the judges were consistent in their preference. I do worry though that the choice of music will influence the outcome.
 
If instead of making a conscious choice in each test the listeners had just tossed a coin to decide their answer, it's quite likely that similar results would have been obtained. On that basis the results are meaningless.
Surely the results show that the listeners 'conscious choice' was the same as tossing a coin, and therefore the cables didn't make the system sound different. Which seems a meaningful (and expected...) result.

Paul
 
You're right Paul. Since the results were, as Tim F says, as close to 50% as you'll get the test does show that the cables made no difference. In that sense it's not meaningless at all.

Michael.
 
So 13 out of 15 times an audible difference was noted. But of course you can just wave that away I guess.
 
Since you think it's OK to assume that was the case, it's quite reasonable for anyone to assume it wasn't. The more assumptions you can eliminate in any experiment, the more indisputable the results.

Absolutely Michael. Assumptions are best avoided.

Placing the additional stipulation that testers would need to demonstrate a minimum hearing acuity would have made this test less attractive to potential volunteers.

Thus an assumption was made:

Are the testers of normal hearing acuity? I thought it more reasonable to assume they were than that they were not, given the low statistical incidence of hearing impairment within the UK population compared with those of normal hearing acuity.

Suggesting the results are “of no consequence” (3Dsonics) because the testers may have been hearing impaired or of insufficient hearing acuity is possibly the weakest argument I've seen so far. The testers were representative (that was the whole point).
 
So 13 out of 15 times an audible difference was noted. But of course you can just wave that away I guess.

Audiophiles (for want of a much better word) are inclined to hear differences irrespective. The second question in the method (and the results thereof) is relevant here.
 
So 13 out of 15 times an audible difference was noted. But of course you can just wave that away I guess.
The testers were deliberately predisposed to hear differences. The failure to spot the control by two out of three is telling. It would have been interesting to see more runs of the control test.

Paul
 
Paul Ranson said:
The testers were deliberately predisposed to hear differences. The failure to spot the control by two out of three is telling. It would have been interesting to see more runs of the control test.

Paul

And cables are notoriously system dependent, meaning that the equality in expressed preferences is also of no meaningful use whatsoever.

As I said, the test results are sadly of no use at all, other than being able to stoke up an argument!
 
And cables are notoriously system dependent

Based on what evidence? Lots of people saying so? Guff written in hi-fi magazines?

Possibly correct, possibly not, until looked at empirically under suitable testing conditions.

As I said, the test results are sadly of no use at all

Your conclusion SM, as valid as anyone else's. :)
 