What a sad excuse for a group this is...
In article , Malcolm wrote:
> On Fri, 28 Dec 2007 10:08:27 +0000, Jim Lesurf wrote:
>> In article , Malcolm wrote:
>>> The problem is that many (most?) such tests fail to reliably distinguish between A and B. One cannot, in that case, say that A and B are the same. That's a logical fallacy.

>> Indeed. That is why the conclusion would not be that they are "the same". Only that the evidence from the test indicated them to be audibly indistinguishable when compared.

> Absolutely correct - as long as you add the rider that they are audibly indistinguishable under the conditions of said test - whatever they may be.

Indeed. And we can then go on to link this with the other evidence we have from a range of similar tests with various test conditions, choices of test subjects, etc. So by using an array of such tests we extend the range of situations and subjects for which we can take the conclusion to be a reliable guide.

>> So the "flaw" seems to be that you wish to draw inappropriate conclusions from a test intended for another purpose. This isn't a "flaw" in the test, but in your inappropriate use of the results. The "logical fallacy" is in the way you present an inappropriate conclusion and bypass the appropriate one. :-)

> No, it doesn't matter what the purpose of the test is

Actually it is vital to understand the purpose of the test, for the reason your posting illustrated - that otherwise you can draw inappropriate and thus misleading 'conclusions' which are not due to any "flaw" in the test.

> - the results (such as they are) stand. The only "logical fallacy" I'm referring to is that some say that a test that fails to find a difference between A and B "proves" that A and B are identical.

Which, as I pointed out, is not a flaw in the test, but in the 'result' you stated as being taken from it. Thus if the above is the "fundamental flaw" you were referring to, then I am afraid it is in your understanding, not in the tests. :-)

> If listeners in a "test" situation cannot distinguish the sound between one system and another then one cannot assume that those same listeners in another situation (for the sake of argument a "home" situation) would also not be able to distinguish between the systems.

Nor can we assume that they *can* do so simply on the basis that they express such a belief. What we can do (and people have done) is run tests in a variety of situations looking for any cases or circumstances that show themselves as allowing for a change in audibility. So far as I know, the results we have show no sign that we have evidence for the assumption that they can do so in a 'home' situation when not being tested.

> The problem, of course, is that when said listeners claim such differences, some "scientists" say that that is nonsense and that if they (the listeners) hear such differences at home, then they must surely be able to hear the same differences under test/laboratory conditions.

Part of the problem here is your vague label of "scientists". This allows you to attack a grouping of that label of your choice. But it tells us little about the evidence, or what people can, or cannot, actually distinguish by sound. Thus you present a false dichotomy: on one side your label "some scientists" say something is 'nonsense', and on the other side people say it is 'true'. Yet the situation is that we have no reliable evidence to support the assertion that people *can* hear 'at home' what they then fail to show they can hear in a test.

What we do have is a variety of test evidence which shows that in some cases people did distinguish one thing from another, and in other cases they did not. In science we base our understanding on evidence, not on speculations. If you wish to propose the hypothesis that people can hear 'at home' things which they can't in other situations, then it is open to you to run a suitably controlled test in a 'home' to see what the evidence then shows. But so far as I recall, some of the tests *have* been in people's homes, and in circumstances they felt would be fine for them. So I see no obvious reason to assume in advance that your test would show a fresh result. But please do a test and report the details, and we can then assess the results along with examining the details of the test method, etc. Until then you are speculating beyond (or in conflict with) the body of evidence we have.

> Since the two situations are fundamentally different, and what is being "measured" is one aspect of human perception, there is a bit of a problem in asserting that the "home" vis-a-vis the "laboratory" situation will have no effect on the perception itself.

As above. You would need to define this "fundamental difference" in a way that then allowed you to test your hypothesis. If you can't do that, then you are simply offering a vague speculation as a possible 'excuse'. All kinds of things can be speculated or claimed to be 'possible' in some vague way. Teapots beyond the moon. Mediums who excuse their failure to produce ectoplasm at a seance because "the vibrations were not right due to the presence of skeptics in the room", etc, etc. I am afraid none of that is science. For that, your ideas need to be backed up with a doable test; you then do the test and decide, on the basis of how reliable the resulting evidence is, whether it supports or refutes your hypothesis.

> Personally, I think that if a "well" (and there are very few of those) conducted listening test fails to show a difference between two systems/components, then the differences (if any) are probably not worth worrying too much about.

I also have a similar view, but agree with your other implied point that what may pass unnoticed or be trivial in some situations might matter more in others. So we have to go with the evidence rather than personal views or speculations beyond the evidence. :-)

> However, I see no problem whatsoever in anyone conducting their own home listening tests and deciding on the basis of those tests that one item is better than the other. It is the height of arrogance for anyone to claim that they are "wrong".

Not if we can show good reasons (supported by evidence) that their test had a flawed method, or that they failed to do the tests well enough for the results to have any significant level of reliability. Also not if there are good reasons to doubt their conclusions have any worth for anyone else, so they may be misleading if given as a 'guide' or a 'review' or 'evidence' for others. All depends on the details of the case.

The point here is that there are various well-evidenced mechanisms that can cause someone to perceive a 'difference' which is not the one asserted by someone making a claim. So although we can't tell them they are 'wrong' when they say they heard a difference, we can often have a firm evidential basis for saying that they are probably 'wrong' in the *reasons* they assert as the 'cause' of that difference, as their test/comparison did not deal with these factors which routinely arise.

As I assume you are aware, the literature on topics like the physiology of hearing does deal with such matters, and this is why academically run tests routinely take them into account. But casual home listening normally does not, so it can easily lead to quite unfounded 'conclusions' by those involved. Ditto for a variety of well established acoustic and physical factors, like slight movements altering the room acoustic, etc.

Speculations about what "might be so" are useful in science, but they have no real value until tested; then we can decide on the results, not on how plausible or attractive the speculation sounds. And if a speculation can't be tested, then it is not 'science'. In such cases we can simply make use of the simplest ideas that fit the evidence. No need for any needless 'mechanisms' or 'effects'. Occam.

I am afraid that people believe all kinds of things, often contradicting each other. So we can't rely on what people assert as their 'belief'. Sincerity is no warranty that what people believe is correct.

Slainte,

Jim

--
Electronics      http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc       http://www.audiomisc.co.uk/index.html
Armstrong Audio  http://www.audiomisc.co.uk/Armstrong/armstrong.html
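[As an aside on the kind of evidence discussed above: the sketch below is not from the thread, and the listener, trial counts and threshold are purely hypothetical. It shows one common way of judging whether a forced-choice blind comparison gives evidence of an audible difference, by asking how likely the score would be from guessing alone.]

```python
# A minimal sketch of an exact one-sided binomial check for a forced-choice
# (e.g. ABX-style) blind comparison. Illustrative only; the trial counts and
# scores below are made up.
from math import comb

def p_value_at_least(correct: int, trials: int, chance: float = 0.5) -> float:
    """Probability of scoring 'correct' or better by guessing alone."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical session: the listener picks the right system in 12 of 16 trials.
print(round(p_value_at_least(12, 16), 3))   # ~0.038: guessing is an unlikely explanation
print(round(p_value_at_least(9, 16), 3))    # ~0.402: no evidence of an audible difference
```

[A score like 9 of 16 is quite compatible with guessing, which is why the appropriate reading of such a result is "no difference demonstrated" rather than "proved identical".]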
What a sad excuse for a group this is...
In article , Laurence Payne NOSPAMlpayne1ATdsl.pipex.com wrote:
> On Fri, 28 Dec 2007 13:10:14 -0600, Malcolm wrote:
>> Personally, I think that if a "well" (and there are very few of those) conducted listening test fails to show a difference between two systems/components, then the differences (if any) are probably not worth worrying too much about. However, I see no problem whatsoever in anyone conducting their own home listening tests and deciding on the basis of those tests that one item is better than the other. It is the height of arrogance for anyone to claim that they are "wrong".

> It shouldn't have to be "well" conducted if, as disciples claim, cables make an immediate and striking difference.

I must confess I do wonder how many of the people who make assertions like that about cables really understand how variable hearing itself is, and how many things can affect the perception - thus leading to an unsupported conclusion unless dealt with in a comparison. My impression is that few do.

Slainte,

Jim

--
Electronics      http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc       http://www.audiomisc.co.uk/index.html
Armstrong Audio  http://www.audiomisc.co.uk/Armstrong/armstrong.html
What a sad excuse for a group this is...
> So if you wish to learn, then the standard academic science methods of a literature search and doing your own experiments are yours to take up. :-)
> Slainte, Jim

Yes, I'm well aware of all the above. Since I teach an MA course it's my job to mark such research and oversee research methods. I've often been curious about setting up proper listening tests, just to test various hypotheses. I've never actually done so - HiFi is not my "job" as a psychologist (performance and media health is my area) - just a strong hobby interest. One day maybe, but it's a low priority.

What I try to do as much as possible is to involve several people in listening tests. In the London Audiocircle Circle we have frequent meets where we do comparative tests. We find this very helpful - first we use our own ears in building, then we bring our various builds to the common arena. Sometimes there's no conclusive preference, sometimes a unanimous preference. On the few occasions we've done blindfold tests it has also happened that we can't tell one thing from another - one being a vinyl versus CD front end! There again, on other occasions we can.

Although we learn from these listening tests and the results feed back into how we build, none of the above has been set up scientifically - I don't think any of us is fanatical enough to need to do so, and nobody is in the business of publishing such results. So as of now we have nothing to contribute academically. Maybe one day somebody will come along who is sufficiently motivated to do so, but for now we're too busy building, plus most of us have other jobs that take up time. Interesting thought, though.
What a sad excuse for a group this is...
"Andy Evans" wrote in message
... - HiFi is not my "job" as a psychologist (performance and media health is my area) - What is your area? The words "performance", "media" and "health" all individually mean something to me, but I cannot fathom out what "performance and media health" means at all. David. |
What a sad excuse for a group this is...
David Looser wrote:
> "Andy Evans" wrote in message ...
>> HiFi is not my "job" as a psychologist (performance and media health is my area)

> What is your area? The words "performance", "media" and "health" all individually mean something to me, but I cannot fathom out what "performance and media health" means at all.
> David.

First google hit: http://www.performanceandmedia.co.uk/
What a sad excuse for a group this is...
>> What is your area? The words "performance", "media" and "health" all individually mean something to me, but I cannot fathom out what "performance and media health" means at all.
>> David.

> First google hit: http://www.performanceandmedia.co.uk/

Yes, as above - I run the MA course at Thames Valley University, see also www.performanceandmediahealth.com. It covers the psychology of performing (motivation, stage fright, the Zone etc), medical problems of performers, creativity and the creative therapies, and more recently the health aspects of new media - games addiction, computer addiction, video nasties and censorship, VR and its implications, online social activities etc. It's the first MA of its kind, and has been going well.

Andy
What a sad excuse for a group this is...
Jim Lesurf wrote:
> In article , Andy Evans wrote:
>> However, if we test and compare two items or systems and find that the listeners can't distinguish the sound using one from using the other, then we have evidence that they need not take assumptions that they "sound different" seriously when commenting on the items or systems. *Unless* some other appropriately run test shows other results in the form of evidence that can be assessed.

> I don't think you wrote the above, Andy, despite taking the credit for it. It would help if you were to identify when you quote in the standard manner. However...

>> I think the difficulty here is that "listeners" is a variable and so is "test conditions". The test conditions would be not too difficult to replicate, but the listeners could not be easily replicated, nor could their emotional/health states at time of testing, even if they were.

> This is why a number of different such tests have been done, using varied listeners and various situations. As the evidence rolls in, this gives some statistical scope to the reliability of the results. Your objections have been thought of, and repeatedly dealt with, over some decades.

>> I would hazard a guess that the quality, aural acuity and perceptual sensitivity of a listening panel could not be easily standardised, and since the whole experiment depends on their aural perception, I'd foresee this as a logistical problem. How would you suggest tackling this in logistical terms?

> Since you didn't bother to reference who you were quoting, you'd have to say who you are asking, and why they should do what you ask. :-) However... I/we don't need to "suggest" anything, as people working on the topic have *already* tackled the problems you raise, as indicated above. The tests already done cover a range of cases and listeners, and there is the tendency for the results to show that - regardless of beliefs to the contrary - people often show no ability to hear the 'differences' they assert they can. I lost count some years ago of how many different such tests have been done using different groups of listeners, etc. People have been doing them for over two decades to my knowledge. Similarly, there are cases when listeners *can* distinguish one thing from another and do so with statistical reliability, e.g. where the comparison is for a large enough difference in level, or frequency response.

>> If you randomise the panel, this would not correspond to audiophile listeners.

> People have, as I point out, used both various 'audiophile' groups and other groups. So far as I know, the results are fairly consistent for specific classes of items under examination - e.g. between amps. They indicate what can, and cannot, be heard with any reliability in various cases, by a range of people.

>> Maybe you would need to randomise a sample of audiophiles who had already been tested for good hearing. Whether you would consider musicians and audiophiles as equivalent would, additionally, truly set the cat among the pigeons.

> It is, of course, open to you and anyone else to run their own properly conducted tests, and report the results. No need for any "maybe" or speculations which are unsupported by the evidence we already have. So, for example, if you think a specific factor matters, or that some people are 'golden eared', then you can test your theory and see if the evidence supports it. However, if you check the history of what has already been done you may well find that someone else has already tried the hypothesis you have in mind, and found it didn't stack up when tested.

> So if you wish to learn, then the standard academic science methods of a literature search and doing your own experiments are yours to take up. :-)

Could you or anyone give me a clue here - an author perhaps? I've just read something by Marc Perlman* - but I shouldn't think it's up your street!

Rob

* Marc Perlman (2004), "Golden Ears and Meter Readers: The Contest for Epistemic Authority in Audiophilia", Social Studies of Science, 34, 783.
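[One practical point behind the "statistical scope to the reliability of the results" remark above is simply the number of trials. The sketch below is my own illustration, not a procedure described in the thread; the 70% success rate and the trial counts are assumed purely for the example. It simulates how often a listener who genuinely does better than chance would still fail to reach significance in short sessions.]

```python
# A rough sketch of why short listening sessions are weak evidence on their own.
# Assumed for illustration: a listener who is genuinely right 70% of the time,
# judged by an exact one-sided binomial test against guessing at alpha = 0.05.
import random
from math import comb

def significant(correct: int, trials: int, alpha: float = 0.05) -> bool:
    """True if this score would be unlikely (p <= alpha) from guessing alone."""
    p = sum(comb(trials, k) * 0.5**trials for k in range(correct, trials + 1))
    return p <= alpha

def detection_rate(true_rate: float, trials: int, runs: int = 20000) -> float:
    """Fraction of simulated sessions in which the real ability is detected."""
    hits = sum(
        significant(sum(random.random() < true_rate for _ in range(trials)), trials)
        for _ in range(runs)
    )
    return hits / runs

for n in (8, 16, 32, 64):
    print(f"{n:2d} trials -> detected in about {detection_rate(0.7, n):.0%} of sessions")
# Roughly: a modest but real ability is usually missed with only 8 trials,
# and is picked up most of the time only once the session gets fairly long.
```

[This is one reason a single casual comparison carries little weight either way, and why results are usually pooled across listeners and sessions.]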
What a sad excuse for a group this is...
"Andy Evans" wrote in message ... 'Rock and roll' has a lot more energy, life and vibrancy than 'classical' ever will. Energy - Shostakovich, Liszt, Vivaldi Life - what classical music doesn't have life Vibrancy - Tchaikovsky, Prokofiev, Bizet, Verdi, Berlioz etc. Besides, don't you tire of hearing a zillion mildly different performances of the same old music that's been going around for centuries ? I tire of hearing a zillion different girl bands singing "baby baby" and a zillion rock bands with fuzz guitar, loud drums and lyrics that could be written by robots. Ditto - especially anything with the word 'lurve' in it.... |
What a sad excuse for a group this is...
"Rob" wrote * Marc Perlman (2004) Golden Ears and Meter Readers: The Contest for Epistemic Authority in Audiophilia; 34; 783, Social Studies of Science :-) |
What a sad excuse for a group this is...
Keith G wrote:
> "Andy Evans" wrote:
>>> 'Rock and roll' has a lot more energy, life and vibrancy than 'classical' ever will.

>> Energy - Shostakovich, Liszt, Vivaldi. Life - what classical music doesn't have life? Vibrancy - Tchaikovsky, Prokofiev, Bizet, Verdi, Berlioz etc.

>>> Besides, don't you tire of hearing a zillion mildly different performances of the same old music that's been going around for centuries?

>> I tire of hearing a zillion different girl bands singing "baby baby" and a zillion rock bands with fuzz guitar, loud drums and lyrics that could be written by robots.

> Ditto - especially anything with the word 'lurve' in it....

Very little of what I listen to has either of those. You're talking about POP, not real rock music.

Graham