Statistics, science, and decision making are widely misunderstood and misapplied. Building a sound grasp of these difficult concepts and practices takes sustained reading and evaluation. This excerpt, from an article by Blume and Peipert, highlights how to weigh the evidence for clinical practice:
The natural tendency, we believe, is for scientists and researchers to want to measure and summarize evidence. Hypothesis testing puts too much emphasis on a single statistically significant study, without regard to costs and benefits of the therapy under consideration. Moreover, published medical research reports often provide no firm evidence for decision making. Each report contributes incrementally to an existing body of knowledge. Increasing recognition of this fact is reflected in the growth of formal methods of research synthesis, including presentation of an updated meta-analysis in the discussion section of original research reports. When presented in this manner, prior evidence is based on results of reports addressing the same issue, and new data are added to the body of evidence. All forms of evidence (animal studies, different epidemiologic study designs, etc.) should be considered as one weighs the evidence, and this is not done in the hypothesis-testing framework. The discussion section of a research report should put the results in the context of other evidence in the medical literature to arrive at a logical conclusion.
In our opinion, conclusions and recommendations for decision making should not be based on results of a hypothesis test alone. Evidence for clinical practice should be based on all available evidence, the strength of the association, the precision of this estimate, potential public health benefits (and harms), and economic considerations. The P-value plays an important role in this respect and that is why it will just not disappear, despite its deficiencies. Science needs a way to measure and summarize the strength of the evidence in data objectively. Without a better option, the P-value is here to stay; however, we can avoid some of the problems associated with it by presenting effect sizes and CIs for the effect under investigation.
The arbitrary dichotomies of research findings as statistically significant and insignificant, which result from the pure hypothesis-testing approach, are often not helpful scientifically and can lead to problems. Researchers want to measure and summarize the strength of evidence in the data. This is what P-values are used for, not for making decisions. And although they may not be the best measure available, they are the standard of statistical care at this time. [...] New research findings must be put into the context of existing knowledge. Medical evidence for decision making should rely on a synthesis of existing research studies, and the contribution of the new data presented to the current evidence in the literature.
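The excerpt's point about synthesizing new data with existing studies can be made concrete with a minimal sketch of inverse-variance (fixed-effect) pooling, the simplest form of meta-analysis. The study estimates and standard errors below are entirely hypothetical, chosen only to show how a new study shifts the pooled estimate and narrows its confidence interval rather than overturning prior evidence on its own:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of effect estimates.

    Each study is weighted by the inverse of its variance, so precise
    studies contribute more. Returns the pooled estimate and its 95% CI.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Three hypothetical prior studies (e.g. log risk ratios and standard errors)
prior_effects = [0.30, 0.10, 0.25]
prior_ses = [0.15, 0.20, 0.10]

prior_est, prior_ci = pool_fixed_effect(prior_effects, prior_ses)

# Add one hypothetical new study to the existing body of evidence
new_est, new_ci = pool_fixed_effect(prior_effects + [0.05], prior_ses + [0.12])

print(f"prior pooled estimate:   {prior_est:.3f}, 95% CI ({prior_ci[0]:.3f}, {prior_ci[1]:.3f})")
print(f"updated pooled estimate: {new_est:.3f}, 95% CI ({new_ci[0]:.3f}, {new_ci[1]:.3f})")
```

Note how the new study moves the pooled estimate incrementally and tightens the interval; this is the arithmetic behind the excerpt's claim that each report contributes to, rather than replaces, the existing evidence.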
This echoes Guyatt and Djulbegovic's statement about evidence and EBM practice:
The basis for the first EBM epistemological principle is that not all evidence is created equal, and that the practice of medicine should be based on the best available evidence. The second principle endorses the philosophical view that the pursuit of truth is best accomplished by evaluating the totality of the evidence, and not selecting evidence that favours a particular claim.
How much of the evidence do you base your practice on, and do you know its quality?
1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
Daniel Dennett, Intuition Pumps and Other Tools for Thinking.
Valid criticism is doing you a favor. - Carl Sagan