Study: New algorithm can accurately detect documentation of non-routine radiology results
Paras Lakhani, MD, and Curtis P. Langlotz, MD, PhD, from the radiology department at the Hospital of the University of Pennsylvania in Philadelphia, evaluated the rule-based algorithm, which was developed to search radiology reports for frequently used words and phrases that indicate proper documentation of non-routine results communication.
According to the authors, hospitals must provide proper documentation of non-routine communications for accreditation purposes.
“Since there is not yet standard language that radiologists must use to report such findings,” wrote Lakhani and Langlotz, “providing such evidence can be difficult, often requiring an imprecise and laborious manual review of a large body of reports that impedes the monitoring of such communications for quality control purposes.”
The goal of the study was to “devise an automated method to search the text of radiology reports and identify those reports that contain documentation of communications with the referring provider.”
The researchers designed an algorithm using structured query language (SQL) to detect documentation of non-routine results in radiology reports.
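The paper's actual query text is not reproduced in this coverage, but a rule of this kind can be sketched in SQL roughly as follows. The table name (radiology_reports), column names and search phrases below are illustrative assumptions, not the study's real schema:

```sql
-- Illustrative sketch only: flag reports whose free text contains
-- phrases suggesting a communication was documented.
-- Table, columns and phrases are assumptions, not from the study.
SELECT report_id
FROM radiology_reports
WHERE LOWER(report_text) LIKE '%findings were discussed with%'
   OR LOWER(report_text) LIKE '%was notified%'
   OR LOWER(report_text) LIKE '%was informed%';
```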
The researchers reviewed 2.3 million free-text reports archived between 1997 and 2005 in the Hospital of the University of Pennsylvania’s RIS/PACS for phrases and wording that indicated the documentation of non-routine results.
Because of the large number of verbs radiologists could have used to document results, the researchers first selected a small sample of 50 reports to identify the most common "specific phraseology."
The researchers found three key elements that recurred in the documentation of communications: a verb, such as "notify" or "informed"; a noun identifying the recipient of the communication, such as "doctor" or "nurse"; and a noun referring to the diagnostic study.
From this sample, the researchers built a database ranking the 15 verbs physicians most frequently used when documenting non-routine reporting, such as "reviewed," "discussed," "reported" and "informed," and applied each to the algorithm.
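The frequency-ranking step could, for illustration, be expressed as a simple SQL aggregation. The verb_hits table here is a hypothetical intermediate (one row per verb occurrence found in a report), not something described in the paper:

```sql
-- Hypothetical sketch: rank candidate notification verbs by how often
-- they occur across reports. verb_hits is an assumed intermediate table.
SELECT verb, COUNT(*) AS occurrences
FROM verb_hits
GROUP BY verb
ORDER BY occurrences DESC
FETCH FIRST 15 ROWS ONLY;  -- some dialects use LIMIT 15 instead
```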
“For every verb … we selected 50 consecutive radiology reports containing those verbs and analyzed how those verbs were used in a phrase or sentence,” wrote Lakhani and Langlotz.
Using the same method, the researchers compiled a list of the most common notification recipients and added it to the algorithm schema.
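Combining the verb and recipient lists, the full rule might look something like the sketch below; again, the table name and phrase lists are assumptions made for illustration:

```sql
-- Illustrative combined rule: flag a report only when it contains both
-- a notification verb and a recipient noun. All names are assumptions.
SELECT report_id
FROM radiology_reports
WHERE (LOWER(report_text) LIKE '%notified%'
    OR LOWER(report_text) LIKE '%discussed%'
    OR LOWER(report_text) LIKE '%informed%')
  AND (LOWER(report_text) LIKE '%dr.%'
    OR LOWER(report_text) LIKE '%physician%'
    OR LOWER(report_text) LIKE '%nurse%');
```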
Of the total of 2.3 million radiology reports, the query algorithm selected 5.1 percent. Of these, 200 were randomly selected and analyzed; according to Lakhani and Langlotz, each report was manually reviewed for testing and validation purposes.
Of the 200 reports, 194 contained some form of documentation of communications and six (3 percent) did not, yielding a precision of 97 percent.
Of the remaining 94.9 percent of reports excluded by the query, 2,000 were analyzed. Only two of the 2,000 contained documentation of communication, corresponding to a query recall of 98.2 percent.
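For readers who want to see how those figures fit together, a back-of-the-envelope reconstruction (our arithmetic, not a calculation from the paper) runs roughly as follows:

\[
\text{precision} = \frac{194}{200} = 0.97
\]
\[
\text{TP} \approx 0.97 \times (0.051 \times 2{,}300{,}000) \approx 113{,}800, \qquad
\text{FN} \approx \frac{2}{2{,}000} \times (0.949 \times 2{,}300{,}000) \approx 2{,}183
\]
\[
\text{recall} = \frac{\text{TP}}{\text{TP}+\text{FN}} \approx \frac{113{,}800}{113{,}800 + 2{,}183} \approx 0.98
\]

This estimate is consistent with the reported 98.2 percent recall, allowing for rounding in the published percentages.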
The authors cautioned that while the algorithm could be adapted by other facilities, its accuracy may be lower elsewhere because it was derived from reporting language specific to the Hospital of the University of Pennsylvania.
Lakhani and Langlotz suggested that further research could apply the algorithm to the facility’s entire radiology database to track metrics such as frequencies, study codes and inpatient and outpatient services.