5 key criteria for AI prediction models in cardiology

Prediction models based on artificial intelligence (AI) could potentially transform cardiology forever, identifying high-risk patients early enough that they can receive the care they need before it is too late.

According to a new analysis in European Heart Journal, however, AI-based prediction models can only deliver significant value if they meet certain quality criteria.[1]

“In the cardiovascular health literature, analytical AI techniques are frequently used for the development of prediction models,” wrote contributing author Maarten van Smeden, a professor at UMC Utrecht in the Netherlands, and colleagues. “Despite the great potential of AI-based prediction models for application in the field of cardiovascular health, only few prediction models have so far shown their usefulness in clinical care. To improve the chances of clinical implementation of AI-based prediction models and thus make impact on cardiovascular health, we must hold their development and validation to high scientific standards.”

With this push toward higher standards in mind, the group listed five “critical quality criteria” that clinicians should look for in every AI-based prediction model. These criteria represent the minimum requirements for any advanced algorithm designed to help identify a specific subset of patients; missing even one could be enough to sink an AI model’s potential.

These are the group’s five criteria:

1. Complete, transparent reporting

Complete and transparent reporting is necessary to examine both how the AI model was developed and how it performs. This can also give more credibility to the model, the authors noted, because it helps other interested parties replicate and reproduce the work themselves.

“Systematic reviews have consistently shown that the reporting of prediction models, including those that are based on AI, is often poor,” the group wrote. “Complete reporting should include the detailed description of all steps of the modeling process, including all data preparation steps, all model selection, tuning, recalibration, testing steps, and all results from internal and external validation procedures.”

Relevant frameworks—CODE-EHR and TRIPOD, for example—also play an important role. The group added that an update to the TRIPOD guidelines is “expected soon.”

2. A clear intended use

“The development of any AI-based prediction model should be motivated by a clearly defined clinical problem for which the AI prediction model could serve as a solution,” the authors explained. “The opportunities and possible pitfalls of a new AI-based model will only become evident if the intended use of the model, including where and how it should be positioned in the clinical workflow, is made explicit.”

3. Thorough validation

When developing a new prediction model, internal validation (testing the model on individuals drawn from the same population used for development, but not the same individuals) and external validation (testing the model on a different population altogether) are two crucial steps.

“It should be noted that one external validation may not be sufficient to provide a complete picture of the heterogeneity of predictive performance, and therefore, all claims of a model being ‘validated’ should be viewed with some skepticism,” the authors wrote. “Good predictive performance also does not prove that the model will have a beneficial influence on medical decision-making when the model is used in a healthcare setting. For this, decision curve analysis, (early) health technology assessments, and impact studies (e.g. via randomized clinical trials) can generate valuable information on the clinical benefit and risks of an AI-based prediction model.”
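To make that distinction concrete, here is a minimal sketch of what internal validation, external validation and a decision-curve-style net benefit calculation might look like in practice. It assumes a scikit-learn logistic regression model, hypothetical CSV cohort files and an illustrative decision threshold; it is not the authors’ method, and every file name, column name and number below is an assumption for illustration only.

```python
# Minimal sketch: internal vs. external validation of a risk prediction model.
# Dataset names, column names and the chosen model are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

# Hypothetical development cohort: predictors X and binary outcome "event".
dev = pd.read_csv("development_cohort.csv")            # assumed file
X_dev, y_dev = dev.drop(columns="event"), dev["event"]

model = LogisticRegression(max_iter=1000)

# Internal validation: performance estimated on resampled splits of the
# *same* population (here, 5-fold cross-validated discrimination).
internal_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc")
print(f"Internal (cross-validated) AUC: {internal_auc.mean():.3f}")

# Fit on the full development cohort before external testing.
model.fit(X_dev, y_dev)

# External validation: the frozen model is applied, without refitting, to a
# *different* population (another hospital, region or time period).
ext = pd.read_csv("external_cohort.csv")                # assumed file
X_ext, y_ext = ext.drop(columns="event"), ext["event"]
p_ext = model.predict_proba(X_ext)[:, 1]
print(f"External AUC: {roc_auc_score(y_ext, p_ext):.3f}")

# Decision-curve-style net benefit at one example threshold probability pt,
# a rough sketch of how clinical usefulness (not just accuracy) is quantified.
pt = 0.10
y = y_ext.to_numpy()
decide = p_ext >= pt
tp = np.sum(decide & (y == 1))
fp = np.sum(decide & (y == 0))
n = len(y)
net_benefit = tp / n - fp / n * (pt / (1 - pt))
print(f"Net benefit at threshold {pt:.0%}: {net_benefit:.4f}")
```

In this kind of sketch, the external cohort is never used for model fitting or tuning; it only receives the frozen model, which is what makes the performance estimate an external one.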

4. An adequate sample size

Researchers must work with sufficiently large sample sizes to ensure both the “robust development” and “accurate validation” of their prediction model are possible.
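To give a rough sense of the scale involved, the sketch below uses the simple events-per-variable (EPV) heuristic; the EPV target, predictor count and outcome prevalence are all assumed for illustration, and formal sample-size criteria for prediction models are considerably more nuanced than this rule of thumb.

```python
# Back-of-the-envelope events-per-variable (EPV) heuristic for gauging whether
# a development dataset is large enough. Illustrative only; all numbers are
# hypothetical and formal sample-size criteria go well beyond this heuristic.

def required_events(n_candidate_predictors: int, epv: int = 20) -> int:
    """Minimum number of outcome events under an EPV-style rule of thumb."""
    return n_candidate_predictors * epv

def required_cohort_size(n_candidate_predictors: int,
                         outcome_prevalence: float,
                         epv: int = 20) -> int:
    """Approximate cohort size needed to observe that many events."""
    return round(required_events(n_candidate_predictors, epv) / outcome_prevalence)

# Example: 25 candidate predictors, 8% outcome prevalence, EPV target of 20.
print(required_events(25))              # 500 events
print(required_cohort_size(25, 0.08))   # 6,250 participants
```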

5. Open data and software

Making an AI model’s data and software publicly available ensures other interested parties have the information they need to fully understand and appraise its value. Some research teams may be hesitant to be so accommodating, but the authors wrote that they “warn against the tendency of researchers to not share code or data.”

The group noted that a full breakdown on the necessity of data sharing was published in European Heart Journal back in 2017.[2]

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.

