Richard Feynman described pseudoscience as follows:
In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.
Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
Warning signs of pseudoscientific practices:
- A tendency to invoke ad hoc hypotheses, which can be thought of as "escape hatches" or loopholes, as a means of immunizing claims from falsification.
- An absence of self-correction and an accompanying intellectual stagnation.
- An emphasis on confirmation rather than refutation.
- A tendency to place the burden of proof on skeptics, not on proponents, of claims.
- Excessive reliance on anecdotal and testimonial evidence to substantiate claims.
- Evasion of the scrutiny afforded by peer review.
- Absence of "connectivity" (Stanovich, 1997), that is, a failure to build on existing scientific knowledge.
- Use of impressive-sounding jargon whose primary purpose is to lend claims a facade of scientific respectability.
- An absence of boundary conditions (Hines, 2003), that is, a failure to specify the settings under which claims do not hold.
Another list of pseudoscientific practices:
- Belief in authority: It is contended that some person or persons have a special ability to determine what is true or false. Others have to accept their judgments.
- Unrepeatable experiments: Reliance is put on experiments that cannot be repeated by others with the same outcome.
- Handpicked examples: Handpicked examples are used although they are not representative of the general category that the investigation refers to.
- Unwillingness to test: A theory is not tested although it is possible to test it.
- Disregard of refuting information: Observations or experiments that conflict with a theory are neglected.
- Built-in subterfuge: The testing of a theory is so arranged that the theory can only be confirmed, never disconfirmed, by the outcome.
- Abandonment of explanations without replacement: Tenable explanations are given up without being replaced, so that the new theory leaves much more unexplained than the previous one. (Hansson 1983)
When reading scientific studies, it is important not just to appraise what we read, but also to appraise the entire field.
1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
Daniel Dennett, Intuition Pumps and Other Tools for Thinking.
Valid criticism does you a favor. - Carl Sagan