Wenzel writes in the NEJM,
Given the sometimes elusive and often provisional nature of scientific truth, we need to emphasize that our books are vastly incomplete and that current concepts represent only a temporary resting place for understanding, continually requiring testing and further analyses. They are not the final word but a brief stop on the path we seek: truth through science.
Textbooks play an important role in the educational system, but how often is their content thoroughly examined? A great deal of work goes into writing the chapters, yet the methodology for selecting references and any conflicts of interest are not explicitly stated in each chapter. Other problems may go unaddressed in textbooks as well, such as publication bias, the replication crisis, and HARKing, to name a few. Textbooks may be good resources, especially for background knowledge, but their limitations need to be acknowledged. Although background knowledge is important, it is insufficient for clinical decision making. Here's what Alvan Feinstein stated in an article for the NEJM in 1970,
The argument is sometime offered that clinical science is obtained by translation of the concepts of "basic science" to the activities of the bedside. This type of translation is excellent for the decisions with which physicians explain mechanisms of disease, but it cannot solve our scientific problems in managerial decisions, because the strategies and tactics of contemporary "basic science" do not provide the appropriate methods, and are not suitable for the material with which doctors work in the managerial clinical activities of the bedside.
[...]
If we widen the concept of fundamental scientific research to include basic problems in the management of patients as well as in mechanisms of disease, if we restore an investigative balance that allows clinical science to be made with clinical skills as well as with laboratory technology, and if we recognize that there are two kinds of basic science rather than one, we may be able to rescue both clinical medicine and basic science from the intellectual degeneration that will come to both if, in the name of clinical science, a rigorous but often inappropriate molecular biology is fused with a sentimental and equally inappropriate form of social work.
Not only is background knowledge insufficient for clinical decision making; like other kinds of medical knowledge, and as history shows, it is always changing. This has long been recognized in epistemology, a branch of philosophy concerned with the theory of knowledge,
Defined narrowly, epistemology is the study of knowledge and justified belief. As the study of knowledge, epistemology is concerned with the following questions: What are the necessary and sufficient conditions of knowledge? What are its sources? What is its structure, and what are its limits? As the study of justified belief, epistemology aims to answer questions such as: How we are to understand the concept of justification? What makes justified beliefs justified? Is justification internal or external to one's own mind? Understood more broadly, epistemology is about issues having to do with the creation and dissemination of knowledge in particular areas of inquiry.
While some scientists and philosophers accept that knowledge is fallible and adopt practices to correct it, others, especially non-scientists and non-philosophers, may be more reluctant to adopt this attitude. One philosopher who studied scientific knowledge extensively was Karl Popper, who said the following of knowledge,
Falsificationists (the group of fallibilists to which I belong) believe — as most irrationalists also believe — that they have discovered logical arguments which show that the programme of the first group cannot be carried out: that we can never give positive reasons which justify the belief that a theory is true. But, unlike irrationalists, we falsificationists believe that we have also discovered a way to realize the old ideal of distinguishing rational science from various forms of superstition, in spite of the breakdown of the original inductivist or justificationist programme. We hold that this ideal can be realized, very simply, by recognizing that the rationality of science lies not in its habit of appealing to empirical evidence in support of its dogmas — astrologers do so too — but solely in the critical approach — in an attitude which, of course, involves the critical use, among other arguments, of empirical evidence (especially in refutations). For us, therefore, science has nothing to do with the quest for certainty or probability or reliability. We are not interested in establishing scientific theories as secure, or certain, or probable. Conscious of our fallibility we are only interested in criticizing them and testing them, in the hope of finding out where we are mistaken; of learning from our mistakes; and, if we are lucky, of proceeding to better theories.
For some, this much uncertainty may be overwhelming, but even when not acknowledged it is present in daily practice. With regard to the diagnostic process and the attempt to obtain absolute certainty, Jerome Kassirer, a former editor of the NEJM, said the following in a 1989 article,
Absolute certainty in diagnosis is unattainable, no matter how much information we gather, how many observations we make, or how many tests we perform. A diagnosis is a hypothesis about the nature of a patient's illness, one that is derived from observations by the use of inference. As the inferential process unfolds, our confidence as physicians in a given diagnosis is enhanced by the gathering of data that either favor it or argue against competing hypotheses. Our task is not to attain certainty, but rather to reduce the level of diagnostic uncertainty enough to make optimal therapeutic decisions.
There are different approaches to knowledge, and David Deutsch, a physicist heavily influenced by Karl Popper's work, explains the difference between fallibilism and empiricism in his book The Beginning of Infinity in the following manner,
The deceptiveness of the senses was always a problem for empiricism – and thereby, it seemed, for science. The empiricists’ best defence was that the senses cannot be deceptive in themselves. What misleads us are only the false interpretations that we place on appearances. That is indeed true – but only because our senses themselves do not say anything. Only our interpretations of them do, and those are very fallible. But the real key to science is that our explanatory theories – which include those interpretations – can be improved, through conjecture, criticism and testing.
[...]
The misconception that knowledge needs authority to be genuine or reliable dates back to antiquity, and it still prevails. To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know. . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.
The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism. To believers in the justified-true-belief theory of knowledge, this recognition is the occasion for despair or cynicism, because to them it means that knowledge is unattainable. But to those of us for whom creating knowledge means understanding better what is really there, and how it really behaves and why, fallibilism is part of the very means by which this is achieved. Fallibilists expect even their best and most fundamental explanations to contain misconceptions in addition to truth, and so they are predisposed to try to change them for the better. In contrast, the logic of justificationism is to seek (and typically, to believe that one has found) ways of securing ideas against change. Moreover, the logic of fallibilism is that one not only seeks to correct the misconceptions of the past, but hopes in the future to find and change mistaken ideas that no one today questions or finds problematic. So it is fallibilism, not mere rejection of authority, that is essential for the initiation of unlimited knowledge growth – the beginning of infinity.
Furthermore, in an interview with Robert Lawrence Kuhn, David Deutsch describes his theory of explanation from a scientific realist's approach (compare with Bas van Fraassen's constructive empiricism) and its relationship to truth and reality,
An explanation is a statement of what is there in reality. How it works and why, basically. But the important distinction is between a good explanation and a bad explanation. Explanations are two a penny, but good explanations are extremely hard to come by and this is what the growth of knowledge is actually about. A good explanation is one that is hard to vary while still explaining what it purports to explain. [...] Not all good explanations are reductionists... my basic principle that we should be looking for good explanations which I think is the foundation of scientific rationality implies that we must not have that [reductionistic] prejudice because if we do find an explanation that is on a higher level emergence say and we find that fundamental law at the higher level of emergence and it's a good explanation then it's simply irrational to reject it because it doesn't have the form which historically we have been taught is the one we should pursue.
False philosophy is not harmful; in fact, error is the standard state of human knowledge. We can expect to find error everywhere, including in the theories that we most cherish as true. [...] Bad philosophy is philosophy whose effect is to close off the growth of knowledge in that field. [...] Logical positivism is a prime example of bad philosophy [Interview of A. J. Ayer with Bryan Magee about Logical Positivism].
I think all progress, historically and today, comes from the quest for good explanations, that is, explanations that are hard to vary while still accounting for what they purport to account for. This principle not only explains what the criterion for success is in science, where it leads to the principle of testability of theories, because a test constrains the explanation so that it's hard to vary, but it also applies outside physics: in philosophy, in epistemology, in metaphysics, and so on. It draws a distinction between ideas that have a chance of making progress and ideas that have no chance of making progress.
The reality is that medical knowledge, if considered to be science, as Wenzel states, does not depend on authority or social groups; it is fallible and open to revision at any time. To claim otherwise, or to teach it without its critical method, is to say that medicine is not scientific. As Imre Lakatos said in his lecture on science and pseudoscience,
Many philosophers have tried to solve the problem of demarcation in the following terms: a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. But the history of thought shows us that many people were totally committed to absurd beliefs. If the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of heaven and hell as knowledge. Scientists, on the other hand, are very sceptical even of their best theories. Newton’s is the most powerful theory science has yet produced, but Newton himself never believed that bodies attract each other at a distance. So no degree of commitment to beliefs makes them knowledge. Indeed, the hallmark of scientific behaviour is a certain scepticism even towards one’s most cherished theories. Blind commitment to a theory is not an intellectual virtue: it is an intellectual crime.
[...]
The cognitive value of a theory has nothing to do with its psychological influence on people’s minds. Belief, commitment, understanding are states of the human mind. But the objective, scientific value of a theory is independent of the human mind which creates it or understands it. Its scientific value depends only on what objective support these conjectures have in facts.
Addendum August 22, 2017:
It is important to keep in mind that the information in textbooks may be out of date by the time they are published. I would also like to add this quote from a former dean of Harvard Medical School, who said in 1944:
Half of what we are going to teach you is wrong, and half of it is right. Our problem is that we don't know which half is which.
1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
Daniel Dennett, Intuition Pumps and Other Tools for Thinking.
Valid criticism is doing you a favor. — Carl Sagan