In the chapter Uncertainty in clinical medicine (author's copy) of the Handbook of the Philosophy of Medicine, Djulbegovic et al. describe three main sources of uncertainty in defining disease: uncertainty in discriminating between "normal" and "abnormal", uncertainty about disease outcomes, and uncertainty in applying population data to the individual patient. These uncertainties derive from scientific studies in which measurement plays a major role. But measurement, in science as in clinical practice, is an exercise fraught with complexities that depend on many factors, including but not limited to assumptions, observations, instruments, and theoretical underpinnings. Some measurements we "learn" without ever acquiring a deep understanding of the measurement itself. Take, for example, how we learn the boiling point of water, as Hasok Chang illustrates:
We all learn at school that pure water always boils at 100°C (212°F), under normal atmospheric pressure. Like surprisingly many things that "everybody knows", this is a myth. We ought to stop perpetuating this myth in schools and universities and in everyday life: not only is it incorrect, but it also conveys misleading ideas about the nature of scientific knowledge. And unlike some other myths, it does not serve sufficiently useful functions.
There are actually all sorts of variations in the boiling temperature of water. For example, there are differences of several degrees depending on the material of the container in which the boiling takes place. And removing dissolved air from water can easily raise its boiling temperature by about 10 degrees centigrade.
The fickleness of the boiling point is something that was once widely known among scientists. It is quite easy to verify, as I have learned in the simple experiments that I show in this paper. And it is still known by some of today's experts. So actually the strange thing is: why don't we all hear about it? Not only that, but why do most of us believe the opposite of what is the case, and maintain it with such confidence? How has a clear falsehood become scientific common sense?
Measurement in medicine, both clinical and research, relies for the most part on what is known as the Gaussian distribution, also referred to as the normal distribution. In 1920 Karl Pearson said of the Gaussian curve,
Many years ago I called the Laplace-Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal.' That belief is, of course, not justifiable. It has led many writers to try and force all frequency by aid of one or another process of distortion into a 'normal' curve.
In 1947 R. C. Geary wrote of normality,
Our historian will find significant change of attitude about a quarter-century ago following on the brilliant work of R. A. Fisher who showed that, when universal normality could be assumed, inferences of the widest practical usefulness could be drawn from samples of any size. Prejudice in favour of normality returned in full force and interest in non-normality receded to the background (though one of the finest contributions to non-normal theory was made during the period by R. A. Fisher himself), and the importance of the underlying assumptions was almost forgotten. Even the few workers in the field (amongst them the present writer) seemed concerned to show that 'universal non-normality doesn't matter': we so wanted to find the theory as good as it was beautiful. References (when there were any at all) in the text-books to the basic assumptions were perfunctory in the extreme. Amends might be made in the interest of the new generation of students by printing in leaded type in future editions of existing text-books and in all new text-books:
Normality is a myth; there never was, and never will be, a normal distribution.
This is an over-statement from the practical point of view, but it represents a safer initial mental attitude than any in fashion during the past two decades.
The Gaussian distribution is used widely not only in the investigation of disease, but also in clinical practice. In their book The Evidence Base of Clinical Diagnosis, Knottnerus and Buntinx, drawing on Edmund Murphy's 1972 article The normal, and the perils of the sylleptic argument (PMID: 5040077), describe the pitfalls of using the Gaussian distribution in medicine:
A common “Gaussian” definition (fortunately falling into disuse) assumes that the diagnostic test results... will fit a specific theoretical distribution known as the normal or Gaussian distribution. Because the mean of a Gaussian distribution plus or minus 2 standard deviations encloses 95% of its contents, it became a tempting way to define the normal many years ago, and came into general use. It is unfortunate that it did, for three logical consequences of its use have led to enormous confusion and the creation of a new field of medicine: the diagnosis of nondisease. First, diagnostic test results usually do not fit the Gaussian distribution. (Actually, we should be grateful that they do not; the Gaussian distribution extends to infinity in both directions, necessitating occasional patients with impossibly high BNP results and others on the minus side of zero.) Second, if the highest and lowest 2.5% of diagnostic test results are called abnormal, then all the diseases they represent have exactly the same estimated frequency, a clinically nonsensical conclusion.
The third harmful consequence of the use of the Gaussian definition of normal is shared by its more recent replacement, the percentile. Recognizing the failure of diagnostic test results to fit a theoretical distribution such as the Gaussian, some laboratory specialists suggested that we ignore the shape of the distribution and simply refer (for example) to the lower (or upper) 95% of BNP or other test results as normal. Although this percentile definition does avoid the problems of infinite and negative test values, it still suggests that the underlying prevalence of all diseases is exactly the same, 5%, which is silly and still contributes to the “upper-limit syndrome” of nondisease because its use means that the only “normal” patients are the ones who are not yet sufficiently worked up. This inevitable consequence arises as follows: if the normal range for a given diagnostic test is defined as including the lower 95% of its results, then the probability that a given patient will be called “normal” when subjected to this test is 95%, or 0.95. If this same patient undergoes two independent diagnostic tests (independent in the sense that they are probing totally different organs or functions), the likelihood of this patient being called normal is now (0.95) × (0.95) = 0.90. So, the likelihood of any patient being called normal is 0.95 raised to the power of the number of independent diagnostic tests performed on them. Thus, a patient who undergoes 20 tests has only 0.95 to the 20th power, or about 1 chance in 3, of being called normal; a patient undergoing 100 such tests has only about 6 chances in 1,000 of being called normal at the end of the workup.
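To make the arithmetic in the quote concrete, here is a minimal Python sketch. It is entirely my own illustration, not anything from the book: it simulates a right-skewed "biomarker" (real analytes such as BNP are typically right-skewed; the lognormal parameters below are arbitrary) and reproduces both pitfalls, the impossible negative lower limit of a mean ± 2 SD reference range and the 0.95 raised to the power n probability of being called normal after n independent tests.

```python
# Illustrative sketch only -- simulated data, not real BNP values.
import math
import random

random.seed(42)

# Pitfall 1: a "mean +/- 2 SD" reference range on skewed data.
# Lab analytes are often right-skewed, so simulate one with a lognormal
# distribution (parameters chosen arbitrarily for illustration).
sample = [random.lognormvariate(3.0, 1.0) for _ in range(10_000)]
mean = sum(sample) / len(sample)
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (len(sample) - 1))
print(f"Gaussian-style reference range: {mean - 2 * sd:.1f} to {mean + 2 * sd:.1f}")
# The lower limit comes out negative -- the patients "on the minus side
# of zero" that the quote warns about.

# Pitfall 2: the vanishing "normal" patient.
# If each independent test labels its extreme 5% "abnormal", the chance
# of being called normal after n tests is 0.95 ** n.
for n in (1, 20, 100):
    print(f"P(called normal after {n:3d} independent tests) = {0.95 ** n:.4f}")
# n = 20 gives about 0.36 (roughly 1 chance in 3); n = 100 gives about
# 0.006 (roughly 6 chances in 1,000), matching the figures in the quote.
```

Nothing here depends on the particular lognormal parameters; any sufficiently skewed distribution whose standard deviation is large relative to its mean will push the lower limit below zero.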
Knottnerus and Buntinx go on to describe other pitfalls of assuming "normal" in medicine. What normal is, what its philosophical foundations are, and how it is measured should constantly be questioned. This is not so different from Hasok Chang's endeavor of questioning the long-held belief in, and teaching of, a fixed boiling point of water. Knottnerus and Buntinx give a fair warning about misunderstanding what normal means in medicine:
[W]e need to acknowledge that several different definitions of normal are used in clinical medicine, and we confuse them at our (and patients’) peril.