Jun 1, 2017

Uncertainty in observational studies

The American Society of Clinical Oncology has put out a statement about the potential of observational studies to inform clinical decision making (PMID: 28358653). Among other things, the statement does a good job of highlighting the complementary roles and limitations of observational studies and randomized controlled trials (RCTs). Understanding the nuances of research methodology is important when drawing conclusions from the scientific literature, and I would like to quote a few excerpts from the statement in this post. The following sentence, in my opinion, is one of the most important highlights of the article because it stresses the move away from evidence hierarchies based on study design toward considering the quality of the entire body of evidence:

...the methodologies used to design clinical practice guidelines have been evolving from evidence hierarchies that are based on study design toward those that characterize the quality of the entire body of evidence.

The statement also lists factors that influence the quality of the evidence in observational studies:

Factors that contribute to study quality include providing an a priori plan for data analysis; assessing the strengths and weaknesses of existing databases; using an appropriate study design and risk adjustment; accounting for missing data and possible ascertainment or selection biases; analyzing the heterogeneity of both population and treatment effects; validating scales and tests; and using sensitivity analyses to determine the impact of key assumptions.8,11,12 Another way to conceptualize quality is in terms of the accuracy of the results (eg, did the study minimize the risk of bias, loss of data, and error that are potentially inherent in data capture from routine clinical practice?) and the precision of the results (eg, was the study sufficiently powered and clinically meaningful, and were the comparisons theoretically balanced and valid?).
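
To make one of these quality factors concrete, here is a small sketch (my own illustration, not part of the statement) of one common sensitivity analysis: VanderWeele and Ding's E-value, which asks how strong an unmeasured confounder would have to be, on the risk-ratio scale, to fully explain away an observed association.

    import math

    def e_value(rr: float) -> float:
        """E-value for an observed risk ratio: the minimum strength of
        association (risk-ratio scale) an unmeasured confounder would need
        with both treatment and outcome to explain away the observed effect."""
        if rr < 1:
            rr = 1 / rr  # the formula is symmetric for protective effects
        return rr + math.sqrt(rr * (rr - 1))

    # Example: an observational study reports RR = 1.8. A confounder would
    # need associations of about 3.0 with both treatment and outcome to
    # account for the entire observed effect.
    print(f"{e_value(1.8):.2f}")  # 3.00

A small E-value means a weak unmeasured confounder could explain the result; a large one means the finding is harder to dismiss on confounding grounds alone.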

Why use observational studies?

Developments in research and policy have led to an expanded role for observational research in clinical decision making. One factor is the lack of evidence from high-quality prospective interventional studies. Only 3% to 5% of patients with cancer participate in RCTs, and these patients are younger and fitter than the average patients; thus, results may not be generalizable to the patients seen in many settings and practices.13 The consequence is that clinicians lack information on how best to treat the large number of patients who might not have qualified for prospective studies because of confounders and comorbidities. Another limit on trial participation is cost (both financial as well as time and effort). Observational research provides a less expensive approach to fill gaps in knowledge and capture data on everyday clinical practice, especially if the data are representative of national, regional, vulnerable, or other specialized populations.2,14

Table 1 of the statement summarizes the strengths and weaknesses of these two types of research; how each of these factors affects the evidence requires further exploration.


RCTs are known to control bias better than other study designs, such as observational studies. When evaluating the evidence in observational studies, it is important to consider how the handling of the data and potential biases may affect the study's results and conclusions.

Observational studies have different limitations than RCTs. They are at greater risk of bias, and attention needs to be given to the design and analysis of these studies to ensure that they are robust enough to guide clinical practice. The types of biases that can affect observational research include: (1) selection bias—the result of inadvertent or intentional differences in selection of patients for treatment when multiple treatment approaches are being compared, (2) performance bias—the result of differences in adherence, (3) detection bias—the result of differential assessment of outcomes, and (4) attrition bias—the result of differences in the groups that withdraw from a study. In addition, whether an observational study is prospective or retrospective may influence the strength of the evidence, with the assessment of causation only possible in prospective studies.
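
Selection bias in particular is often addressed at the analysis stage by modeling each patient's probability of receiving treatment and reweighting accordingly. The sketch below uses synthetic data and standard scikit-learn calls to illustrate inverse probability of treatment weighting (IPTW); the variable names and effect sizes are invented for illustration, and this is a simplified picture of the idea, not a recipe from the statement.

    # Sketch: inverse probability of treatment weighting (IPTW) to reduce
    # selection bias in an observational comparison. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    age = rng.normal(65, 10, n)        # confounder
    frailty = rng.normal(0, 1, n)      # confounder

    # Older, frailer patients are less likely to receive the new
    # treatment (selection bias by indication).
    p_treat = 1 / (1 + np.exp(-(-0.05 * (age - 65) - 0.8 * frailty)))
    treated = rng.binomial(1, p_treat)

    # Simulated true treatment effect on the outcome is +1.0.
    outcome = (1.0 * treated - 0.03 * (age - 65) - 0.5 * frailty
               + rng.normal(0, 1, n))

    # Model the probability of treatment from the measured confounders.
    X = np.column_stack([age, frailty])
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Stabilized inverse probability weights.
    w = np.where(treated == 1,
                 treated.mean() / ps,
                 (1 - treated.mean()) / (1 - ps))

    naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
    weighted = (np.average(outcome[treated == 1], weights=w[treated == 1])
                - np.average(outcome[treated == 0], weights=w[treated == 0]))
    print(f"naive difference:   {naive:.2f}")     # biased upward
    print(f"IPTW-adjusted diff: {weighted:.2f}")  # closer to the true +1.0

In this simulation the naive comparison overstates the benefit because healthier patients were preferentially treated; the weighted comparison recovers something close to the simulated effect. Crucially, this only works for confounders that were measured, which is exactly why unmeasured confounding remains the Achilles heel of observational research.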

The quality of data, including its completeness and accuracy, is also important to consider when determining whether an observational study should inform clinical practice (ie, the principle of garbage in, garbage out). Data collected for clinical and administrative purposes are often repurposed for research and are not as tightly controlled as RCT data.2,49,50 Missing data are common, which can be particularly problematic if they are not missing at random.10,51 Moreover, key confounders that can influence relationships being studied may be unavailable in existing data sets. EHRs, for example, may not completely capture patients’ prior surgeries, family history, travel history, intravenous drug use, and disease onset.50 An added variable, which is difficult to quantify, is the expertise of the staff entering data into big data sets. In large hospital systems, a broad range of staff members enter data relating to demographics, fiscal status, toxicity, treatment adherence, and so on. This leads to variable data quality, which can easily confound the interpretation of study results on the basis of these data.
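
The "not missing at random" point is worth a small demonstration. In the toy simulation below (again my own illustration), a biomarker is less likely to be recorded for sicker patients, and the complete-case average consequently paints an overly rosy picture of the population:

    # Toy simulation: missing not at random (MNAR). Sicker patients are
    # less likely to have a biomarker recorded, so a complete-case mean
    # overestimates population health. Illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    biomarker = rng.normal(50, 10, 10_000)  # lower value = sicker patient

    # Probability of being recorded rises with health (MNAR mechanism).
    p_recorded = 1 / (1 + np.exp(-(biomarker - 45) / 5))
    recorded = rng.binomial(1, p_recorded).astype(bool)

    print(f"true mean:          {biomarker.mean():.1f}")
    print(f"complete-case mean: {biomarker[recorded].mean():.1f}")  # biased up

No amount of additional data fixes this kind of bias; the missingness mechanism itself has to be understood, modeled, or bounded.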

Readers of the scientific literature have different levels of expertise, live in different regions, and have access to different resources, so I think it is their responsibility to consider not only the factors in their own practice but also how features of study design affect their decision making. Exploring the strengths and weaknesses of study designs, and how they influence results and conclusions, helps in understanding one of the most important sources of uncertainty in scientific research and informed clinical decision making. Han et al. have been studying uncertainty in medicine; their diagram below is a great illustration of their framework.


[Figure: conceptual taxonomy of uncertainty in health care, from Han et al., PMCID: PMC3146626]


1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
- Daniel Dennett, Intuition Pumps and Other Tools for Thinking

Valid criticism is doing you a favor. - Carl Sagan