Discrepancies in reporting of trial results raise questions about accuracy

An analysis of 96 clinical trials found that almost all had at least one discrepancy between the trial results published in journals and the results posted on ClinicalTrials.gov, the public clinical trial registry. The findings were published in the March 12 issue of JAMA.

Harlan Krumholz, MD, and Joseph Ross, MD, both of the Yale School of Medicine in New Haven, Conn., and colleagues conducted a cross-sectional analysis of clinical trials published between 2010 and 2011 in top journals whose results were also posted on ClinicalTrials.gov. The researchers compared information reported in both sources, including participant characteristics, trial interventions, and primary and secondary efficacy endpoints and results. They considered results in agreement if the endpoint, time of ascertainment and measurement scale all matched.

The results from the 96 studies were published both on ClinicalTrials.gov and in 19 top journals. The majority of trials (70 percent) were funded by industry, and the most common conditions under investigation were cardiovascular disease, diabetes and hyperlipidemia; cancer; and infectious disease. While 93 to 100 percent of trials reported cohort characteristics, interventions and efficacy endpoint information in both sources, 93 of 96 trials had at least one discrepancy between the two sets of published results or trial information.

Among trials that reported each cohort characteristic and intervention detail in both sources, the rate of discordance ranged from 2 percent to 22 percent and was highest for completion rate and trial intervention. Descriptions of the dosage, frequency or duration of the intervention tended to differ between the two sets of published results. Only six of the discordant results changed the interpretation of the trial.

Of the 132 efficacy endpoints described in both sets of published data, 16 percent showed discrepancies. The authors noted these differences could have been due to reporting errors, typographical errors or changes made during peer review. Where an endpoint was reported in only one source, the omission could reflect space limitations or intentional reporting of more favorable results.

"This study raises serious questions about the accuracy of results reporting in both clinical trial registries and publications, and the importance of consistent presentation of accurate results," Ross said in a release.

Kim Carollo, Contributor
