Science has historically established practices that ensure its integrity, setting it apart from other intellectual endeavors. In his 2015 book The Invention of Science, David Wootton writes:
In the course of the first century after the publication of Vesalius’s anatomy and Copernicus’s cosmology (both appeared in 1543) a set of values was slowly devised for how best to conduct the intellectual activity that we now call science: originality, priority, publication, and what we might call being bomb-proof: in other words, the ability to withstand hostile criticism, particularly criticism directed at matters of fact, came to be regarded as the preconditions of success. The result was a quite new type of intellectual culture: innovative, combative, competitive, but at the same time obsessed with accuracy. There are no a priori grounds for thinking that this is a good way to conduct intellectual life. It is simply a practical and effective one if your goal is the acquisition of new knowledge.
What killed alchemy was the insistence that experiments must be openly reported in publications which presented a clear account of what had happened, and they must then be replicated, preferably before independent witnesses. The alchemists had pursued a secret learning, convinced that only a few were fit to have knowledge of divine secrets and that the social order would collapse if gold ceased to be in short supply. Some parts of that learning could be taken over by the advocates of the new chemistry, but much of it had to be abandoned as incomprehensible and unreproducible. Esoteric knowledge was replaced by a new form of knowledge which depended both on publication and on public or semi-public performance. A closed society was replaced by an open one.
Alchemy was never a science, and there was no room for it to survive among those who had fully accepted the mentality of the new sciences. For they had something the alchemists did not: a critical community prepared to take nothing on trust.
The demise of alchemy provides further evidence, if further evidence were needed, that what marks out modern science is not the conduct of experiments (alchemists conducted plenty of experiments), but the formation of a critical community capable of assessing discoveries and replicating results. Alchemy, as a clandestine enterprise, could never develop a community of the right sort. Popper was right to think that science can flourish only in an open society.
If history and philosophy have something to teach us about scientific and democratic practices, it is that open criticism plays a major role in both endeavors. The absence of this critical attitude, which includes replicating and testing other people's findings, creates an environment with more noise than signal, a point made by many, including John Ioannidis (PMID: 26934549, PMID: 16060722). This brings me to the 2015 book Ending Medical Reversal by Prasad and Cifu, who also criticize current medical practices, evidence-based medicine, and medical education. In their book, Prasad and Cifu define medical reversal as follows:
We expect that medical therapy will change and evolve with time. Good treatments will replace bad ones, and then better ones will replace those. Antibiotics have replaced arsenic, and anesthesia has replaced a bullet held bracingly between the teeth. Recently, however, change has occurred in surprising ways. If you have followed the news about prostate cancer screening, mammography for women in their forties, hormone replacement, cholesterol-lowering medications, and stents for coronary-artery disease, you might think doctors cannot get anything straight. These common practices were not replaced by better therapies; they were found to be ineffective. In some cases, they did more harm than good. You might be worried that some medical practices are nothing more than fads. In some cases, you would be right.
We call this phenomenon “medical reversal.” Instead of the ideal, which is replacement of good medical practices by better ones, medical reversal occurs when a currently accepted therapy is overturned—found to be no better than the therapy it replaced. Now, you might argue that this is how science is supposed to proceed. In high school, we learned that the scientific method involves proposing a hypothesis and testing to see whether it is right. This is true. But what has happened in medicine is that the hypothesized treatment is often instituted in millions of people, and billions of dollars are spent, before adequate research is done. Not surprisingly, sometimes the research demonstrates that the hypothesis was incorrect and that the treatment, which is already being used, is ineffective or harmful.
We believe that reversal is the most important problem in medicine today. When we doctors flip-flop on our advice to patients, it usually is not because the treatments stopped working. It usually is not because someone discovered a harm no one had previously noticed. It is usually because the practice never worked—we were wrong all along. We promoted it before we had properly studied it. We knew it had some harms, sure, but we never thought it lacked benefits. This problem underlies people’s distrust of the medical establishment and is a very important reason that health-care costs are soaring without any improvement in people’s health.
Many people dismiss this phenomenon as the natural course of science: of course hypotheses turn out to be wrong, and we can only move forward through trial and error. Although this is certainly true in biomedical science—where there are false starts, good hypotheses that fail to live up to expectations—it is not the case in medicine. Medicine is the application of science. When a scientific theory is disproved, it should happen in a lab or in the equivalent place in clinical science, the controlled clinical trial. It should not be disproved in the world of clinical medicine, where millions of people may have already been exposed to an ineffective, or perhaps even harmful, treatment.
CAST taught us that even the most careful reasoning and the best scientific models do not guarantee an effective clinical treatment. What works in the lab, or on a computer, or in the head of the smartest researcher does not always work in a patient.
There is a lot more to the book, which I will keep referencing, as I do with other literature. Clinicians and researchers live in different worlds, but that does not mean clinicians never encounter uncertainty or would not find probability useful in practice. To assess how each practice works, it is important to understand how uncertainty and probability apply to each setting. Clinicians are interested in the assessment and care of an individual patient, while researchers use a sample from a population to assess one or more questions; yet both need to know what happens at the individual and the population level. Clinicians, like scientists, are affected by false positives and false negatives, and both use models, and as said before, all models are wrong. Both collect data, take measurements, draw inferences with the help of prior experience and background knowledge, and may offer a course of action. It should not be strange that if either group lacks an open, critical attitude, it will face the same more-noise-than-signal problem.
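The point about false positives and false negatives can be made concrete with Bayes' rule: even a fairly accurate diagnostic test produces mostly false alarms when the condition it screens for is rare. A minimal sketch in Python, with illustrative made-up numbers for prevalence, sensitivity, and specificity (not taken from any real test):

```python
# Positive predictive value via Bayes' rule.
# All numbers below are illustrative, not from any real diagnostic test.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' rule."""
    true_pos = prevalence * sensitivity            # P(D) * P(+|D)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(not D) * P(+|not D)
    return true_pos / (true_pos + false_pos)

# A rare condition (1% prevalence) screened with a seemingly good test
# (90% sensitivity, 95% specificity):
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.90, specificity=0.95)
print(f"P(disease | positive) = {ppv:.2f}")  # roughly 0.15: most positives are false alarms
```

This is the same arithmetic whether one is screening a single patient or a study population; only the prior (prevalence) and the stakes differ, which is why both clinicians and researchers need it.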