Jun 6, 2017

Epistemology: uncertainty in assessing evidence and inquiry

Guyatt and Djulbegovic write:

On the surface, EBM proposes a specific association between medical evidence, theory, and practice. EBM does not, however, offer a new scientific theory of medical knowledge,16,17 but instead has progressed as a coherent heuristic structure for optimising the practice of medicine, which explicitly and conscientiously attends to the nature of medical evidence. Central to the epistemology of EBM is that what is justifiable or reasonable to believe depends on the trustworthiness of the evidence, and the extent to which we believe that evidence is determined by credible processes.17 Although EBM acknowledges a role for all empirical observations, it contends that controlled clinical observations provide more trustworthy evidence than do uncontrolled observations, biological experiments, or individual clinician’s experiences.


The basis for the first EBM epistemological principle is that not all evidence is created equal, and that the practice of medicine should be based on the best available evidence. The second principle endorses the philosophical view that the pursuit of truth is best accomplished by evaluating the totality of the evidence, and not selecting evidence that favours a particular claim.

How do we acquire evidence, and how do we go about assessing it? Susan Haack writes:


In 1843, John Stuart Mill wrote that "[t]he business of the magistrate, of the military commander, of the physician, of the agriculturalist, is merely to judge of evidence, and act accordingly."3 He's right. In fact, we all need to "judge of evidence, and to act accordingly" —in deciding what to eat, whom to trust, whether to undergo a suggested medical treatment, etc. We can't act safely or effectively unless we have some idea what is likely to happen if we do this or that; which requires going with such evidence as we have, or can obtain. Often, also, we need to consider the sources of our evidence, and the possibility that it has been impoverished or distorted as it was passed along; and to discriminate well- from poorly-conducted inquiry— and good-faith efforts to discover the truth from attempts to minimize a scandal or frame a convenient suspect.

Our so-called "Age of Information" is marked, not only by a growing dependence on electronic media and gadgetry for disseminating information, but also by an unprecedented flood of information itself; and by a growing sense that social policies... should be based on knowledge of their benefits, and their costs. It is indeed desirable that social, like individual, decisions be informed by whatever we can find out about the likely consequences of doing this or that—or doing nothing. We shouldn't forget, though, that factual information alone can't tell us what policies to adopt: what the costs and benefits are of damming this river [...], is one thing; whether the benefits outweigh the costs is quite another. Nor should we forget that acquiring information takes effort and, often, money; or that, as the hunger for information grows, not only more and more information, but also more and more misinformation, becomes available; and not only more and more research, but also more and more pseudo-research, is conducted. It gets harder and harder to sift the good stuff and the dreck.


The discipline to which it falls to articulate what distinguishes genuine inquiry from pseudo-inquiry, what makes inquiry better- or worse-conducted, evidence stronger or weaker etc., is the philosophical theory of knowledge, known in the trade as "epistemology" —a charmless and off-putting word for what is too often, I'm afraid, a charmless and off-putting enterprise. [...] Nevertheless, if you want to understand such vital practices as assessing the worth of evidence and the quality of inquiry, epistemology is what you need.


Genuine inquiry is an attempt to discover the truth of some question. This means, not that scientists, historians, etc., seek The Truth, in a quasi-religious sense [...]

A serious inquirer will seek out all the evidence he can, and do his best to assess whether it warrants this conclusion or that, or is insufficient to warrant any conclusion at all. But someone who already knows what conclusion he intends to reach, and is looking for evidence that supports it—and for ways to disguise or play down evidence that points elsewhere—isn’t really inquiring; for it is part of the meaning of the word “inquire” that you don’t know how things will turn out.17 [...]


[...] [O]ur wishes, hopes, and fears can affect our judgment of evidence, but they are not themselves evidence. Evidence consists, rather, of what we see, hear, etc. (experiential evidence) and background information (reasons); which, as I argued in Evidence and Inquiry,19 work together rather like clues and already-completed entries in a crossword puzzle.

Evidence may be better or worse; and whether, and if so, to what degree, a claim is warranted depends on how good the evidence is with respect to that claim. Reasons ramify, like crossword entries; and what makes evidence better or worse is analogous to what makes a crossword entry more or less reasonable: how supportive it is (analogue: how well an entry fits with its clue and already completed entries); how secure it is, independent of the claim in question (how reasonable the already completed entries are); and how comprehensive it is, how much of the relevant evidence it includes (how much of the crossword has been completed). As this third clause reveals, if your evidence is too sketchy, you’re not entitled to believe either way—which is no doubt why the English word “partial” has its two meanings: “incomplete,” and “biased.” As it also reveals, that we have no evidence that p doesn’t mean that we have evidence that not-p.


Moreover, our evidence is often second-hand:21 [...] We can’t get by without relying on evidence passed on by others; so we can’t avoid the necessity, not only of judging how likely it is that they are telling the truth as they believe it to be, but also of judging how adequately they judge the evidence they have.


But for all the complexity of modern life, we humans are still—well, only human. When we need to look into difficult questions, it is always tempting to cut corners; and even with the best will in the world it can be very hard to figure out where complex or ambiguous evidence points. And as Denis Diderot long ago reminded us, man is made up “of strength and weakness, of insight and blindness, of pettiness and grandeur.”22 Yes, we are capable of remarkable cognitive achievements—but too often we are lazy, and jump to conclusions; too often we are biased, and ignore or conveniently forget evidence that points to facts we find unpalatable; and too often we seize on inadequate evidence that confirms our fears or serves our interests.

Scientists, too, are only human, with the same perceptual and cognitive weaknesses and limitations as the rest of us, and the same tendencies to corner-cutting and wishful or fearful thinking. Over time, however, the sciences have developed tools to overcome perceptual and cognitive limitations[...]; and internal social mechanisms by means of which the natural-scientific community, at least, has managed to keep most of its members, most of the time, reasonably honest — an ethos that rewards real achievement, encourages evidence-sharing, and discourages cheating, as well as more formal mechanisms like the peer-review process for distributing research funds and screening publication.23

But while those technical scientific tools generally get better and better, the social mechanisms keeping scientists honest do not; in fact, they are now under severe strain as scientists find themselves ever more urgently required to get grants, to publish, etc.[...]


As we have seen, there are many ways to get into epistemological trouble: misconstruing what the evidence is, or what is relevant to what; focusing on readily available evidence and forgetting about other potentially relevant evidence we don’t have; misjudging how well the evidence we have warrants a conclusion, perhaps allowing our wishes or our fears to color our judgment; failing to realize that information has been lost or distorted in the transmission process, or that those on whom we are relying have allowed their judgment of the weight of evidence to be colored by their hopes or fears; or simply being reluctant to admit that we were mistaken, or that we just don’t know.


What about "evidence-based medicine"? This certainly sounds like a good thing — who wouldn't prefer to know before they take it that this medicine will make them better, and won't kill them in the process? And indeed, evidence-based medicine is a good thing—if "evidence-based" means taking into account all the relevant evidence we have, or can obtain. But things go wrong when the entirely reasonable idea that we should prefer medical treatment which there is evidence to believe is both effective and safe is covertly transmuted into the much less reasonable idea that we should prefer medical treatments supported by a restricted kind of evidence—epidemiological studies and clinical trials. This is classic bait-and-switch: first appeal to our sense that evidence matters, and then covertly allow only evidence of certain preferred kinds.

Epidemiological evidence and clinical trials aren't the only evidence relevant to assessing the value of a medical treatment. Information about, e.g., the effects of a drug on animals is also relevant; as are physicians' observations of which patients respond well to a treatment, and which badly or not at all—which can complement the evidence about large classes of people that epidemiological studies and clinical trials provide with evidence of possibly-relevant individual variations. And epidemiological studies and clinical trials aren't always good evidence, either, but may be flawed in design, execution, or both. [...]

A busy physician, if he doesn't simply rely on what drug-company reps tell him, probably reads at most the abstracts of papers in the medical journals; and may simply assume that the peer-review process will screen out poor work.30 [...]

[...] The scientific peer-review process is, probably, a somewhat better quality-control mechanism than, say, the philosophical peer-review process, but it too is vulnerable to corruption[...]

"There's a lot of work to be done" is an understatement.

Boldface text represents my emphasis.


1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
Daniel Dennett, Intuition Pumps and Other Tools for Thinking.

Valid criticism is doing you a favor. - Carl Sagan