More than ⅔ of cardiovascular RCT reports skew language to reflect better outcomes
More than two-thirds of cardiovascular (CV) randomized clinical trial (RCT) reports spin some aspect of the article to suggest favorable results the data don’t necessarily support, according to a study published in the May edition of JAMA Network Open.
If spin, which first author Muhammad Shahzeb Khan, MD, and colleagues define as the manipulation of language to mislead readers away from the likely truth of a study’s results, is present in cardiovascular RCT reports, it could seriously undermine the integrity of those studies. RCTs have historically generated the highest certainty of evidence for interventions and often form the basis of clinical guidelines, so it’s imperative that they’re reported objectively.
“Evidence-based practice depends on accurate presentation of a trial’s results,” Khan et al. wrote in JAMA Network Open. “Journal editors allow authors of scientific articles broad latitude in the use of language when reporting their study, which may subconsciously or consciously shape the impression of their results for readers.”
Khan and his team searched the MEDLINE database for parallel-group RCTs published between 2015 and 2017 that reported statistically nonsignificant primary outcomes. The rationale for this approach, established nearly a decade earlier in another JAMA study, is that interpretation of statistically nonsignificant results is more likely to be colored by prior beliefs about effectiveness, leaving the report vulnerable to bias.
The researchers pooled 93 RCT reports from six of the highest-impact journals in the field: the New England Journal of Medicine, The Lancet, JAMA, the European Heart Journal, Circulation and the Journal of the American College of Cardiology. Data were extracted and verified by two independent investigators.
The team identified spin in 57% of abstracts and 67% of main texts of the published articles. Of the 93 reports, 10 had spin in the title, 35 in the results section and 50 in the conclusions. Within abstracts alone, spin was most commonly observed in the results (41%) and conclusions (48%) sections.
Khan et al. said authors who included spin in their results and discussion sections tended to focus on statistically significant secondary endpoints, within-group analyses and subgroup analyses to draw attention away from the nonsignificant primary result. They also used spin to emphasize a lack of harm in the safety data without highlighting the statistically nonsignificant efficacy result, and to tout the effectiveness of both treatment arms when statistically significant changes from baseline were seen in both groups.
The researchers wrote that they could only speculate about why authors chose to use positive spin, but said it might stem from financial incentives and career advancement. A 2014 study on publication bias reported that positive results were more likely to appear in higher-impact journals, so authors might also accentuate the positive for a better shot at a notable publication.
“These observations have significant implications for the integrity of clinical science, the translation of clinical evidence at the bedside, peer review and the rate of medical progress,” Khan and co-authors wrote. “Manipulation of language to distort findings may also lead to further public distrust in science.”
In a related editorial, Stephan D. Fihn, MD, MPH, chief of general internal medicine at the University of Washington and deputy editor of JAMA Network Open, said his staff runs into these types of issues all the time. This month marks the journal’s first anniversary, and to date it’s screened 2,400 manuscripts and published 450 articles. Fihn said he and his colleagues meet twice a week to review submissions, and “almost invariably” there’s at least one manuscript with an unacceptable level of spin.
Fihn wrote that authors’ inclination to report their work in the most favorable light “is to some degree understandable,” given the years of work they’ve put into a single project and widespread journal bias toward positive results. He said it may also reflect confirmation bias, which leads us to accept data that confirm our preconceptions and reject data that don’t.
“Nevertheless, failure to maintain a critical and dispassionate perspective is a disservice to the research community, funding agencies, the practicing physicians who must decide which treatments to use and the lay public,” the editorialist wrote. “In an era in which truth is seen as a scarce commodity, dedication to fair and responsible reporting of scientific results is essential to preserving trust in the clinical research enterprise.”