Technical reviews of AV receivers
Wonderfully twisted logic there: all sorts of problems with blind testing, which only goes to prove that there are no differences, that those who claim them are delusional, and that blind testing is the only true way.
No point in discussing further, really.
Audio Editor, Gramophone
As opposed to this logic?:
Blind (read scientific) testing fails to confirm my subjective impressions, therefore blind testing must be faulty.
I agree, there's not much point discussing further.
Robin
NZGUY.
Thanks for the comment on the Sony Blu Ray/SACD player.
Blind (read scientific) testing fails to confirm my subjective impressions, therefore blind testing must be faulty.
Again, you are twisting what I have said to your own meaning. I think my involvement in this thread really has run its course.
Audio Editor, Gramophone
If there has never been a blind tasting test of Cheddar and Blue Stilton, how much more likely is it that there is in fact no difference between them? You claim there is a difference, but have you, yourself, undertaken a blind tasting test? How do you know that you are not deluding yourself into believing there is a difference just because everyone says there is? Millions have a preference for one or the other, yet set up a blind tasting test and some cannot distinguish a difference. Do you, because you think you know there is a difference, a) accept that this casts doubt on your belief, or b) seek reasons for the aberrant result? Is the likelihood of there being a difference dependent upon the percentage able or unable to detect it? Or are people choosing one or the other because they like one or the other? Is their value judgment a subject for your value judgment? Or none of your damned business?
There either is or is not a difference between Cheddar and Blue Stilton. There either is or is not a difference between Tesco's off-the-shelf digital streamer costing tuppence and Linn's £11k Klimax DS unit or Naim's equivalent. How much more or less likely is that difference to be a reality with or without a blind test? With a blind test, human nature being what it is, some will be unable to distinguish between cheeses and between hi-fi units. How many does it take to make a claim scientifically valid, one way or the other?
Make the differences between the cheeses smaller, and between the digital streamers smaller, and the percentage of those unable to distinguish a difference increases. Is the validity of the result increasing or decreasing because the differences are diminishing? Or are other factors (strength of taste buds; hearing capacity) becoming ever more significant? All the factors Andrew lists that have happened in blind tests happen because of human nature, but they are not universally replicated in every situation where judgments are made, only some of them.
It is fatuous to argue that every single customer or reviewer is deluded because some are; that every engineer, manufacturer or product is of equal quality because some are made and sold on dubious grounds, hyped and sought as status symbols, or whatever. I think we need a little more faith in individuals' judgment, integrity and ability to discriminate.
Arguments against this use the innuendo and tactics of climate change deniers and their ilk: cast doubt on the science and technology involved, assume conspiracy, and call for ever more outlandish "proof" in defiance of evident experience. What lies behind this presumption of universal delusion is the belief that most people are not as clever or discriminating as you.
Believe what you like. I'm enjoying the music.
Vic.
I don't claim that blind testing is required for everything. I'd be very surprised if many people couldn't tell the difference between Cheddar and Blue Stilton in a blind test.
If lots of blind Cheddar vs Blue Stilton tests had been carried out and had returned negative results, then I might indeed start to wonder about cheeses in general.
The point about streamers, DACs & amps is that many blind ABX tests have been done on these types of devices, and have failed to detect the differences which seemed obvious in sighted comparisons of those same devices.
Amps are slightly more tricky in that there are issues over loudspeaker loads and clipping characteristics etc., and some well-trained people may be able to pick out subtle differences, but I think the general point stands.
Anyway, I've probably said as much as I think I need to on this subject.
Keep enjoying the music.
Robin
The point about streamers, DACs & amps is that many blind ABX tests have been done on these types of devices, and have failed to detect the differences which seemed obvious in sighted comparisons of those same devices.
...
Anyway, I've probably said as much as I think I need to on this subject.
...
Phileas,
Revisiting this thread I notice I misread your comment about having said as much as you need to on this subject as having said as much as you want to say. (If I haven't misinterpreted your intention, please ignore the following, which, perhaps, others might want to pick up.)
The point of the cheese analogy was a rather fatuous attempt to establish that the absence of testing in itself has no bearing on the fact of the existence of differences.
The claim about the existence of differences which I thought you were making was not about similar products but about all of them. That is, that digitising sound eliminates differences, so that products across the board cannot be distinguished from one another.
You state above that "many blind ABX tests have been done on these types of devices and have failed to detect [ ] differences".
Two points here. I suspect that the blind tests were between units of similar quality from different manufacturers, i.e. Naim's and Linn's middle-range streamers, etc. That is, where differences would be very small indeed and heard only by experienced listeners to such esoteric products, not a man or woman off the street, for instance. How many blind tests would have been done between a cheap off-the-shelf product and a top-of-the-range unit?
Secondly, the statistical results of any blind test are vital to any conclusion drawn from it. A constant, repeated and overwhelming pattern of results no better than random guesswork would be conclusive. You are not claiming that such tests produce results like this, are you? What happens is more subtle than that. Some detect a difference even between units of very similar levels of technology, etc. Most members of the general public who don't listen to good quality equipment would not detect a difference that companies spend fortunes on research and development to achieve, and which is important to only a very small number of audio enthusiasts. It is their experience that is the subject here, not that of members of the general public who might listen on transistor radios or whatever.
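As an aside on the statistics raised here: the usual way of scoring a forced-choice ABX run is a one-sided binomial test against chance. A minimal Python sketch, where the 12-of-16 score is purely illustrative and not taken from any test discussed in this thread:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` ABX trials by guessing alone (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who gets 12 of 16 trials right:
print(round(abx_p_value(12, 16), 3))  # 0.038 - below the usual 0.05 threshold
```

This is also why a single short run proves little either way: with only 8 trials, even 7 correct answers can still arise from pure guessing about 3.5% of the time.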
My point is that it is the engineers in competing companies, their R&D departments, experienced reviewers like our esteemed AE, and discriminating enthusiasts who spend money in a highly competitive market in the search for "the closest approach to the original sound" - it is those people who detect the differences.
To claim that all digital products sound the same - even between the cheapest in one range and the most expensive in another - flies in the face of this experience. Blind tests between products with perhaps minute but, to some, vital differences, which (sometimes) produce inconclusive results, are no evidence at all that a £100 off-the-shelf digital streamer sounds the same as a top-of-the-range one from a highly reputable company in this field, are they?
Vic.
Vic
I just didn't want to prolong a discussion which covers familiar ground.
I will respond to your post, just to clarify my position.
If you spend a bit of time "researching" this subject on the internet, you'll find quite a bit of stuff about blind listening tests.
There's a famous one from a magazine, Stereo Review, that had 25 "expert" listeners and compared lots of amps, ranging from a cheap Pioneer receiver to a multi-thousand-pound tube amp. No statistically significant differences were detected.
(However, the test was set up in such a way as to ensure that none of the amps was driven to clipping, and I dare say a high-powered (and hence more expensive) amp driving a difficult load could be shown to be superior to a cheaper low-powered one, so I'm not suggesting that all amps sound the same in all circumstances.)
This is just one example of how "obvious" differences vanish in blind testing.
Or try this one: http://www.aes.org/e-lib/browse.cfm?elib=14195
So having read a lot about these kinds of tests and related stuff about streamers & DACs and how they should be indistinguishable, I've become rather sceptical about claims of "night & day" differences (I used to be a believer).
I don't know what to say about the R&D of hi-fi manufacturers. I did once read something apparently written by an audio designer who claimed to have worked for a well-known British company. He said he used to convince himself that his tweaks to his own designs made significant SQ improvements, but later decided he was fooling himself.
Robin
My point is that it is the engineers in competing companies, their R&D departments, experienced reviewers like our esteemed AE, and discriminating enthusiasts who spend money in a highly competitive market in the search for "the closest approach to the original sound" - it is those people who detect the differences.
bhg
So having read a lot about these kinds of tests and related stuff about streamers & DACs and how they should be indistinguishable, I've become rather sceptical about claims of "night & day" differences (I used to be a believer).
Thanks for the link. And I have no doubt there are many like this, given the margins we are talking about with units of similar specs and price levels. I too am highly sceptical of "night and day" differences, but that is not what is at stake here. What you are defending is the argument that all units (performing the same function in a digital medium) must sound the same (with the exceptions you allude to). That means, for instance, that the cheapest off-the-shelf digital streamer is, in blind tests, indistinguishable from the most expensive and most highly rated unit in the world. Or that Linn, for another instance, has developed and is marketing four DS players at approx. £1k, £2k, £5k and £11k which in reality sound the same.
Given the commercial rivalry between companies and the media's thirst for controversy, it is inconceivable that such tests have not been undertaken. The fact that we have not heard of the scam and scandal such results would reveal means either that the tests didn't produce the result you claim they would, or that there is an industry- and media-wide conspiracy of silence.
The differences are over-hyped, feed self-delusion, and are meaningful only at the margins, I grant you. But non-existent under all conditions? It's just not credible.
Vic.
I've been following this very interesting debate from the sidelines and haven't commented because I've not had the opportunity to listen regularly enough to the more extreme exotic end of the hi-fi spectrum and am thus unable to indulge in meaningful comparisons (be it through blind testing or otherwise).
Logic and common sense dictate that, notwithstanding the validity of blind tests where appropriate and rigorously conducted, I should tend towards agreeing with Vic. My Logitech Squeezebox delivers some great sounds, but I certainly wouldn't want to stick my neck out and claim that it sounded the equal of his Linn set-up. Similarly, I daresay (forgive me if this is an assumption too far, Vic) that he wouldn't claim there was no difference between his player and the absolute top-of-the-range model. I think when one reaches that level, a variation on the law of diminishing returns must kick in at some stage.
There is one aspect of the blind test v. 'just listening' debate that I can't recall being mentioned, and that is the time factor. Obviously the difference between a Dansette and an LP12 would be immediately obvious to most people, but sometimes the differences between pieces of kit of a similar level reveal themselves over time rather than as an instant revelation. Certainly I've experienced this with at least two amplifiers I can recall buying, and I don't know how many different pairs of speakers. It used to be the same with cartridges when I played LPs. This is why these days I tend to look more favourably on reviews of the 'I've lived with this equipment for a few weeks now and...' type rather than 'we sat in a room and compared ten different amps/speakers, etc.'
Anyway, this long-winded post has not been written to add fuel to the flames of what is, as I say, a fascinating discussion.
JKH
Following this lively discussion on the issue of blind testing, surely the objective is to get as close as possible to the original sound.
This reminds me of a post to Gramophone many years ago, long before the digital age, by a contributor who claimed to have heard a certain work at a live concert, but on returning home found that his much-prized hi-fi sounded better. I believe he was not referring to the interpretation, but to the audio quality. At this point I began to lose confidence in the sanity of the audiophile world.
I suggest that blind test panels should consist of musicians, who are best able to assess how close the electronically processed audio experience comes to the real thing. A panel of 5-10 judges should be statistically significant.
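On whether 5-10 judges is "statistically significant": for a forced-choice blind test the deciding factor is the total number of trials, not the number of judges as such. A rough sketch in stdlib Python; the panel sizes and trial counts below are hypothetical, not taken from any test in this thread:

```python
from math import comb

def chance_probability(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    two-way forced-choice trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def min_correct(trials: int, alpha: float = 0.05) -> int:
    """Smallest score whose chance probability falls below `alpha`."""
    return next(c for c in range(trials + 1) if chance_probability(c, trials) < alpha)

# A single judge doing 10 trials must get 9 of them right to beat
# chance at the 5% level; a hypothetical panel of 8 judges pooling
# 10 trials each (80 trials in total) needs a far smaller margin over 50%.
print(min_correct(10))   # 9
print(min_correct(80))
```

The point is simply that "how many judges" cannot be answered without also fixing the number of trials per judge and the significance threshold.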
Awaiting the furor.
bhg
I want to add a few points.
Firstly, there are many ways of adding cost to an audio component which don't necessarily improve its sound quality. Just a few are:
1. Proprietary design
2. Discrete circuitry as opposed to off-the-shelf components (e.g. op-amps)
3. Exotic, "audiophile-grade" components
4. Fancy casework
5. Extra features
6. Inefficient manufacturing methods (small batch?)
Secondly, on the issue of long-term listening, I know of at least one experiment where double-blind ABX comparison was found to be clearly superior to long-term at-home listening when trying to detect a small amount of deliberately added distortion:
http://www.nousaine.com/pdfs/Flying%20Blind.pdf
Also, in reply to JKH, my argument only encompasses electronics, not transducers, i.e. turntables and loudspeakers.
Robin
That means that for instance the cheapest off-the-shelf digital streamer is, in blind tests, indistinguishable from the most expensive and most highly rated unit in the world.
So, to clarify your position, Phileas: the points you list above, and not sound quality, separate the two products I exemplify, do they? Sorry to nit-pick, but you are either stating there is no difference in sound quality between all (for instance) DS players or you are not. An unequivocal statement to that effect would be more definitive than anecdote and selected examples of tests (however convincing in themselves, admittedly).
Vic.


We'll obviously never agree on this one, but experiences like yours would tend to make me conclude that the apparent differences are illusory and that blind testing is necessary.
Robin