Synthesizing Evidence: Shifting the Focus From Individual Studies to the Body of Evidence

The research enterprise behaves as if a single study could provide the ultimate answer to a clinical question. Researchers offer overoptimistic sample-size calculations and other design features to convince funders and institutional review boards that their study will provide the answer. Medical journals and the popular media highlight single studies, too often without regard to similar studies, and only a small proportion of trials have been synthesized into systematic reviews. For the last 15 years, researchers with the Cochrane Centre have advocated, with limited success, for the results of individual studies to be placed in the context of the totality of evidence. With more empirical evidence now supporting this view, we offer 5 reasons to shift the collective focus from single studies to the accumulating body of evidence.

1. Calibrate confidence in the estimates based on their consistency with other studies. Replication is a key step in science, yet funders, researchers, and journals often dismiss studies that replicate published findings. Replicable findings reassure clinicians about the integrity of research and reduce anxiety about publication bias and false-positive findings. Ioannidis and Trikalinos have argued that most research yields false conclusions but that replicable findings are more likely to be true. Evaluating a single study offers no opportunity to assess consistency.

2. Avoid premature closure about the magnitude of the effect before the estimate has moderated through repeated and independent evaluation. Extreme estimates of effect may be published early in the chain of evidence, the so-called Proteus phenomenon: the most favorable result in support of a specific association is more likely to appear earlier than the least favorable one. Clinical and policy decisions could therefore be made on the basis of an early, inaccurate, or exaggerated estimate.
This phenomenon is not apparent when a single study is appraised in isolation.

3. Calibrate confidence in the estimates based on the true magnitude of precision. When studies are combined in a meta-analysis, inconsistency in results among studies adds a level of uncertainty, which is incorporated into a wider confidence interval around the meta-analytic estimate. Therefore, when results are inconsistent, the apparent precision of the estimate of effect from a single study overestimates the precision that the study contributes to the accumulating evidence. To gain a correct sense of the precision of estimates of effect, the body of evidence should be considered.

4. Improve the usefulness of results to clinical decision makers. Studies often adopt design features that help yield seemingly definitive findings: composite end points, surrogate end points, unfair comparisons (eg, inactive comparators that ensure a large effect can be appreciated in a smaller or briefer study), and selection of patients at high risk of the outcome. These strategies, along with unrealistic sample-size calculations, offer the promise of a definitive result, yet they require evidence users to make assumptions and extrapolations that muddle the interpretation and applicability of results. A larger body of evidence consisting of multiple studies minimizes the need for these design features.

5. Prevent premature dismissal of a potentially effective intervention owing to concerns about nonsignificant results. Some observers claim that conducting small, underpowered studies is unethical because such studies have large random error, may produce misleading results, and waste resources better used to conduct adequately powered trials.
Small negative trials may lead to premature dismissal of an effective intervention and might not be published, whereas small positive trials may overestimate the effect of an intervention and may be more likely to be published. Simulation studies have undermined many of these claims, finding that a body of evidence composed of moderately powered trials may offer a lower false-positive rate than a single well-powered trial, assuming a modicum of publication bias. The most serious consequence of the insistence on well-powered trials is that many moderate-sized trials that would contribute to the body of knowledge are never undertaken.

Murad MH, Montori VM. Synthesizing evidence: shifting the focus from individual studies to the body of evidence. JAMA. 2013;309(21):2217-2218. jama.jamanetwork/article.aspx?articleid=1693898
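Point 3's claim, that inconsistency among studies should widen the confidence interval around a pooled estimate, can be made concrete with a minimal sketch of standard inverse-variance pooling using the DerSimonian-Laird random-effects estimator. The three effect sizes and standard errors below are hypothetical, chosen only to be mutually inconsistent; they do not come from any cited trial.

```python
import math

def dersimonian_laird(effects, ses):
    """Fixed- and random-effects pooling of study effect sizes.
    effects: per-study estimates (eg, log odds ratios); ses: their standard errors."""
    w = [1.0 / se**2 for se in ses]                     # inverse-variance weights
    sw = sum(w)
    fe_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    fe_se = math.sqrt(1.0 / sw)                         # fixed-effect standard error
    # Cochran's Q quantifies between-study inconsistency
    q = sum(wi * (yi - fe_mean)**2 for wi, yi in zip(w, effects))
    k = len(effects)
    # DerSimonian-Laird estimate of between-study variance tau^2
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi**2 for wi in w) / sw))
    # Random-effects weights add tau^2 to each study's variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]
    re_mean = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    re_se = math.sqrt(1.0 / sum(w_star))
    return fe_mean, fe_se, re_mean, re_se, tau2

# Hypothetical trials with identical nominal precision but divergent results
effects = [0.10, 0.55, -0.20]   # made-up log odds ratios
ses = [0.15, 0.15, 0.15]
fe_mean, fe_se, re_mean, re_se, tau2 = dersimonian_laird(effects, ses)
print(f"fixed-effect 95% CI half-width:   {1.96 * fe_se:.3f}")
print(f"random-effects 95% CI half-width: {1.96 * re_se:.3f}  (tau^2 = {tau2:.3f})")
```

With these inconsistent inputs the random-effects interval comes out roughly 2.5 times wider than the naive fixed-effect interval, which is exactly the sense in which a single study's apparent precision overstates what it contributes to an inconsistent body of evidence.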
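Point 5's argument, that moderate-sized trials still contribute to the body of knowledge, can be illustrated with a minimal simulation under simplified assumptions: two-arm trials with known unit-variance outcomes analyzed by z-tests, an illustrative effect size, and no publication bias. This does not reproduce the cited simulation studies; it only shows that several individually underpowered trials, once pooled, recover roughly the power of a single large trial with the same total sample size.

```python
import math
import random

random.seed(0)

def trial_estimate(n, true_effect):
    """One two-arm trial with unit-variance outcomes: effect estimate and its SE."""
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    est = sum(treat) / n - sum(ctrl) / n
    se = math.sqrt(2.0 / n)          # known unit variance, so a z-test applies
    return est, se

def significant(est, se):
    return abs(est / se) > 1.96      # two-sided test at alpha = 0.05

def power(sim, reps=2000):
    """Monte Carlo estimate of the rejection rate of a simulated analysis."""
    return sum(sim() for _ in range(reps)) / reps

true_effect, n_small, k = 0.3, 50, 4  # illustrative values, not from the article

# (a) a single small trial, analyzed alone
p_small = power(lambda: significant(*trial_estimate(n_small, true_effect)))

# (b) fixed-effect (inverse-variance) pooling of k small trials
def pooled():
    results = [trial_estimate(n_small, true_effect) for _ in range(k)]
    w = [1.0 / se**2 for _, se in results]
    est = sum(wi * e for wi, (e, _) in zip(w, results)) / sum(w)
    return significant(est, math.sqrt(1.0 / sum(w)))

p_pooled = power(pooled)

# (c) one large trial with the same total sample size
p_large = power(lambda: significant(*trial_estimate(n_small * k, true_effect)))

print(f"single small trial: {p_small:.2f}, pooled: {p_pooled:.2f}, single large: {p_large:.2f}")
```

Each small trial is badly underpowered on its own, yet the meta-analysis of the four trials attains essentially the same power as the single trial four times their size, which is why insisting that only well-powered trials be run can discard evidence that would otherwise accumulate.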
Posted on: Sun, 09 Jun 2013 14:42:42 +0000
