The development of disease classification systems and the purpose of assigning a diagnosis are processes that serve multiple functions in healthcare. Thomas B. Newman and Michael A. Kohn describe these dynamic processes in their book Evidence-Based Diagnosis in the following manner,
We use the term “disease” for a health condition that is either already causing illness or is likely to cause illness relatively quickly in the absence of treatment. If the disease is not currently causing illness, it is presymptomatic.
[...]
Assigning each illness a diagnosis is one way that we attempt to impose order on the chaotic world of signs and symptoms, grouping them into categories that share various characteristics, including etiology, clinical picture, prognosis, mechanism of transmission, and response to treatment. The trouble is that homogeneity with respect to one of these characteristics does not imply homogeneity with respect to the others. So if we are trying to decide how to diagnose a disease, we need to know why we want to make the diagnosis, because different purposes of diagnosis can lead to different disease classification schemes.
The creation, distribution, and utilization of medical knowledge are processes in the healthcare system that originate from multiple sources. These imperfect processes generate knowledge that is uncertain. Like medicine, the scientific endeavour is imperfect and its knowledge is also uncertain. Both share common practices, which include, but are not limited to, theories, models, measurement, observation, and inference to investigate nature. Karl Popper, a philosopher of science, said the following about objectivity, certainty, uncertainty, and truth in science,
The central features of scientific knowledge are as follows:
1. It begins with problems, practical as well as theoretical.
2. Knowledge consists in the search for truth — the search for objectively true, explanatory theories.
3. It is not the search for certainty. To err is human. All human knowledge is fallible and therefore uncertain. It follows that we must distinguish sharply between truth and certainty. That to err is human means not only that we must constantly struggle against error, but also that, even when we have taken the greatest care, we cannot be completely certain that we have not made a mistake.
To combat the mistake, the error, means therefore to search for objective truth and to do everything possible to discover and eliminate falsehoods. This is the task of scientific activity. Hence we can say: our aim as scientists is objective truth; more truth, more interesting truth, more intelligible truth. We cannot reasonably aim at certainty. Once we realize that human knowledge is fallible, we realize also that we can never be completely certain that we have not made a mistake.
[...]
Since we can never know anything for sure, it is simply not worth searching for certainty; but it is well worth searching for truth; and we do this chiefly by searching for mistakes, so that we can correct them.
Unfortunately, it is our human nature to crave certainty, and this in turn may complicate the process of understanding medical knowledge, along with its use in diagnosis and treatment. In a 1989 article titled "Our Stubborn Quest for Diagnostic Certainty: A Cause for Excessive Testing," Jerome Kassirer wrote:
Absolute certainty in diagnosis is unattainable, no matter how much information we gather, how many observations we make, or how many tests we perform. A diagnosis is a hypothesis about the nature of a patient's illness, one that is derived from observations by the use of inference.1-3 As the inferential process unfolds, our confidence as physicians in a given diagnosis is enhanced by the gathering of data that either favor it or argue against competing hypotheses. Our task is not to attain certainty, but rather to reduce the level of diagnostic uncertainty enough to make optimal therapeutic decisions.4-7
[...] The evidence provided by each new test may argue against the most likely diagnosis hypothesis, or the test result may be falsely positive or negative.
The more information we get, the more confidence in the validity of our diagnoses we feel, even when such confidence may not be justified on the basis of the information obtained.8,9 Of course, the more tests we perform, the higher the risk to the patient: we often find ourselves performing a cascade of risky tests when a set of results is abnormal or ambiguous.
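Kassirer's point about the inferential process can be made concrete with Bayes' theorem, which is the standard way to describe how a test result shifts diagnostic confidence. The numbers below (pre-test probability, sensitivity, specificity) are purely hypothetical illustration values, not clinical figures; the sketch only shows that concordant results raise confidence without ever delivering certainty, and that an imperfect test can mislead.

```python
# A minimal sketch of Bayesian updating in diagnosis, using
# hypothetical numbers. Each test result shifts the probability
# of disease, but it never reaches 1.0 (certainty).

def update(prior, sensitivity, specificity, positive):
    """Return the post-test probability of disease after one test result."""
    if positive:
        p_result_disease = sensitivity       # true positive rate
        p_result_healthy = 1 - specificity   # false positive rate
    else:
        p_result_disease = 1 - sensitivity   # false negative rate
        p_result_healthy = specificity       # true negative rate
    numerator = prior * p_result_disease
    return numerator / (numerator + (1 - prior) * p_result_healthy)

# Hypothetical disease with a 10% pre-test probability, and a test
# with 90% sensitivity and 80% specificity.
p = 0.10
p = update(p, 0.90, 0.80, positive=True)  # after one positive result
p = update(p, 0.90, 0.80, positive=True)  # after a second positive result
# p is now roughly 0.69: higher confidence, but still uncertain,
# and either result could have been a false positive.
```

Note how the false positive rate (1 - specificity) sits in the denominator: a test that is often falsely positive adds little confidence no matter how many times it is repeated, which is exactly the cascade Kassirer warns about.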
Interestingly enough, even sound decision making does not guarantee certainty, as pointed out by Djulbegovic and Elqayam:
[R]ationality does not guarantee that every single decision would be error-free; on the contrary, rational decision-making takes into account the consequences of possible errors—false negatives and false positives—to aid in arriving at desirable outcomes. Rationality is often defined as acting in a way that helps us achieve our goals,[5, 8] which in clinical setting typically means desire to improve our health. Most major theories of choice agree that our goals are best achieved if we take into account both benefits (gains) and harms (losses) of alternative courses of actions, which in medical context (as in everyday life) often occur under conditions of uncertainty.
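The idea that goals are best achieved by weighing benefits against harms under uncertainty has a simple expected-utility form, familiar from threshold models of treatment decisions. The utility values below are hypothetical illustration units, not clinical data; the sketch only shows why a rational rule still produces false positives and false negatives.

```python
# Hedged sketch of threshold decision-making under uncertainty.
# All utility values are hypothetical illustration units.

def expected_utility_treat(p_disease, benefit_if_disease, harm_if_healthy):
    # Treating a diseased patient yields a benefit; treating a
    # healthy patient only incurs the treatment's harm.
    return p_disease * benefit_if_disease - (1 - p_disease) * harm_if_healthy

def treatment_threshold(benefit, harm):
    # Probability of disease at which treating and withholding break even:
    # p * benefit = (1 - p) * harm  =>  p = harm / (harm + benefit)
    return harm / (harm + benefit)

benefit, harm = 10.0, 2.0                  # hypothetical utility units
threshold = treatment_threshold(benefit, harm)  # 2 / 12, about 0.17

# Below the threshold, withholding treatment is the rational choice,
# yet some of those patients do have the disease (false negatives);
# above it, treating is rational, yet some treated patients are
# healthy (false positives). Rationality manages errors; it does
# not eliminate them.
```

This is why, as the quote says, rational decision-making "takes into account the consequences of possible errors" rather than promising error-free decisions: the threshold itself is set by the relative costs of the two kinds of mistake.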
Kassirer lists a few causes of this intolerance to uncertainty:
Why are physicians uneasy with uncertainty? First, we have been taught to think categorically. When we try to think in terms of probabilities, we often falter. We disregard uncertainty or behave as if it does not exist27; we use inexact expressions such as "probable," "occasional," and "likely" to describe the chance nature of events, complications, and efficacies of treatment24,28,29; we judge likelihood of disease and outcomes erroneously30-33; and we combine data on probabilities inaccurately.34,35 Our shunning of probability-oriented thinking is reflected in our textbooks, which are rife with absolutes and increasingly display flow charts with multiple branches that lead the reader down one of a few simplified paths toward a diagnosis. Finally, our virtual freedom to date in the use of tests and procedures has accustomed us to levels of diagnostic certainty higher than are required for optimal decision making. For some of us, ratcheting down from this level of confidence may cause substantial concern.
Science might not have an answer for everything, but it is our most successful invention for understanding how the world works. When done well and with integrity, it has built-in mechanisms for correcting itself. Illusions may arise when a practice uses only the products of science without understanding how science works. This is the problem with understanding what role uncertainty plays in science and medicine, as noted by Kassirer and others. Although there is no single criterion for how science works, it is not an "anything goes" endeavor. Great minds have addressed this problem and have identified characteristics that distinguish science from other practices. Medicine would benefit from learning about the problem of demarcation if it is also interested in learning how it works.