Risk calculator predicts mortality for PAD patients using EHR data

An automated risk calculator derived solely from variables in an electronic health record (EHR) accurately predicted five-year survival in a community-based cohort of patients with peripheral artery disease (PAD), researchers reported Dec. 1 in the Journal of the American Heart Association.

Clinical decision support (CDS) tools that draw on EHR data have been developed and tested for cardiovascular risk calculation, according to the authors, but similar EHR-derived prediction tools have not previously been reported for PAD.

Mayo Clinic researcher Adelaide M. Arruda-Olson, MD, PhD, and colleagues developed the model using data from 1,676 patients with clinically confirmed PAD in Olmsted County, Minnesota. Over five years of follow-up, 593 of the patients died, and the EHR-based tool predicted survival in the overall data set with a c-statistic of 0.76. A c-statistic of 1.0 represents a perfect model, while 0.50 means the model is no more accurate than a coin flip.
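
In concrete terms, the c-statistic is the probability that, for a randomly chosen pair of patients in which one died and one survived, the model assigns the higher predicted risk to the patient who died. The short Python sketch below illustrates that calculation for a binary five-year outcome; the risk scores and outcomes are invented for illustration and are not from the study.

# Illustration only: pairwise concordance (c-statistic) for a binary outcome.
# The scores and outcomes below are made up; they are not the study's data.
from itertools import product

def c_statistic(risk_scores, died):
    """Fraction of (death, survivor) pairs where the death got the higher score."""
    cases = [s for s, d in zip(risk_scores, died) if d]         # patients who died
    controls = [s for s, d in zip(risk_scores, died) if not d]  # survivors
    concordant = ties = 0
    for case_score, control_score in product(cases, controls):
        if case_score > control_score:
            concordant += 1
        elif case_score == control_score:
            ties += 1
    return (concordant + 0.5 * ties) / (len(cases) * len(controls))

scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]   # hypothetical predicted risks
died   = [1,   1,   0,   1,   0,   0]     # 1 = died within five years
print(c_statistic(scores, died))          # about 0.89 here; 0.5 would be a coin flip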

“This study used novel methodologic approaches including deployment of phenotyping algorithms to an EHR and a digitized health information system to acquire data elements to build a new prognostic model and automated individualized risk prediction tool for patients with PAD,” Arruda-Olson et al. wrote. “This automated informatics approach enabled creation of a robust prognostic model for patients with PAD with strong discriminatory power, including c‐statistic magnitudes comparable to those reported for the Framingham Heart Studies.”

Twenty-two variables were included in the prediction model, including age at diagnosis, sex, glomerular filtration rate, prior limb revascularization, other comorbidities and metrics from noninvasive lower extremity evaluations.
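
The article does not spell out the underlying statistical method, but survival models of this kind are commonly fit with Cox proportional hazards regression. Purely as a hedged sketch of that general approach, and not the authors' code, the Python snippet below fits such a model with the lifelines library on a hypothetical, made-up analysis table whose columns loosely mirror the kinds of variables listed above.

# Hedged sketch only: a Cox proportional hazards fit on hypothetical EHR-style
# data. Column names and values are invented; this is not the study's model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [5.0, 2.3, 5.0, 1.1, 4.2, 3.5, 5.0, 2.8],  # time to death or censoring
    "died":           [0,   1,   0,   1,   0,   1,   0,   1],    # 1 = died during follow-up
    "age_at_dx":      [68,  81,  59,  77,  72,  66,  74,  70],
    "egfr":           [74,  38,  88,  45,  61,  70,  55,  52],
    "prior_revasc":   [0,   1,   0,   1,   0,   0,   1,   0],
})

# Small penalizer added only to stabilize the fit on this tiny toy data set.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()                           # hazard ratio per predictor
risk_scores = cph.predict_partial_hazard(df)  # relative risk score per patient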

The researchers stratified patients by percentile into four risk groups: low (below the 16th percentile), low-intermediate (16th to 50th percentile), intermediate-high (50th to 84th percentile) and high (above the 84th percentile). Compared with the low-intermediate subgroup, which served as the reference group, low-risk patients had a 65 percent lower risk of five-year mortality. The intermediate-high- and high-risk groups, meanwhile, had risks of death 2.98-fold and 8.44-fold higher than the reference group, respectively.
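
As a rough illustration of that grouping, and again not the study's code, the snippet below assigns simulated risk scores to the four percentile-based groups described above.

# Illustration only: percentile-based risk groups applied to simulated scores.
import numpy as np

rng = np.random.default_rng(0)
risk_scores = rng.random(1000)          # hypothetical model-predicted risk scores

p16, p50, p84 = np.percentile(risk_scores, [16, 50, 84])

def risk_group(score):
    if score < p16:
        return "low"                    # below 16th percentile
    if score < p50:
        return "low-intermediate"       # 16th to 50th percentile (reference)
    if score < p84:
        return "intermediate-high"      # 50th to 84th percentile
    return "high"                       # above 84th percentile

groups = [risk_group(s) for s in risk_scores]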

“As the data elements were retrieved from an EHR, which serves a single practice and community, it reflects usual clinical practice encompassing information from both inpatient and outpatient settings, and as such may be broadly generalizable to other healthcare systems and EHRs,” the authors wrote. “Risk calculators such as those described herein … will enable personalized and real‐time prognostication and thereby realize the vision of a learning healthcare system to support clinical decision‐making at the point of care.”

These tools could enable discussions about treatments to modify or lower risk, the authors said, and give clinicians more precise survival predictions to guide those discussions and encourage patient adherence.

The authors acknowledged that future studies will be needed for external validation of this approach. Another limitation was the relatively homogeneous racial makeup of the study population: 94 percent of patients were white, possibly limiting generalizability to other groups.

In a related editorial, Mehdi H. Shishehbor, DO, MPH, PhD, and Peter P. Monteleone, MD, pointed out it’s difficult to choose which risk prediction models should be used for patients with multiple comorbidities. They asked if a PAD calculator should be prioritized over a cardiovascular risk calculator or one designed to measure the progression of chronic kidney disease.

“Where does it stop? Alarm fatigue is a real phenomenon, and its potential for precipitation of medical errors in all clinical settings has been robustly described,” the editorialists wrote. “Rigidity of CDS systems is another frequently cited concern in the field. Do clinicians reliant on a PAD tool neglect to evaluate and treat PAD in patients not identified by an imperfect tool?”

Nevertheless, Monteleone and Shishehbor said Arruda-Olson et al.’s work “is valuable because it represents a large step forward in multiple long journeys.”

“It is essential to have built a tool that identifies and risk stratifies a disease that is generally underrecognized and undertreated,” they wrote. “It is important to step boldly into creation of CDS tools and to tackle the power of complex informatics within our electronic health record, despite the technical and systems‐level concerns that exist for CDS in general.”

Monteleone and Shishehbor concluded by describing a future in which machine learning algorithms will be able to optimize patient care and consider the “whole patient,” not just give recommendations or predictions based on an ICD-10 code.

“In this not‐too‐distant future, ‘standard of care’ may represent evaluation of patients by machine learning algorithms and clinician assessment of that evaluation,” they wrote. “In this world, neural networks trained on medically optimized cases could make recommendations not driven solely by variables inputted into a calculator, but by image analysis of wounds or by risk optimization strategies we may have never conceived but that were hypothesized by unsupervised machine learning data analysis. None of this is science fiction anymore.”

""

