TAVR or SAVR? ChatGPT could help cardiologists decide
ChatGPT and other large language models can help heart teams (HTs) identify the best treatment strategies for patients with severe aortic stenosis (AS), according to new research published in EuroIntervention.[1]
“Current guidelines mandate the inclusion of HTs in order to make tailored decisions regarding the treatment of patients with AS, including options such as conservative management, surgical intervention or percutaneous treatment,” wrote first author Adil Salihu, MD, a cardiologist with Lausanne University Hospital in Switzerland, and colleagues. “However, in practical implementation, coordinating and bringing together the diverse expertise of the HT can be challenging and can occasionally result in incomplete attendance at meetings. Moreover, the high volume of patients and time constraints may hinder the detailed evaluation of each patient’s clinical data. These factors can potentially limit the effectiveness of HT meetings, resulting in suboptimal patient management.”
Salihu et al. aimed to learn if artificial intelligence (AI) could make a difference. The group gave data on 150 separate AS patients to an HT that included interventional cardiologists, imaging specialists, cardiac surgeons, a vascular surgeon, an anesthesiologist and a geriatrician. The HT then determined the most appropriate management strategy for each patient: transcatheter aortic valve replacement (TAVR), surgical aortic valve replacement (SAVR) or medical therapy.
The authors then fed data from those same 150 patients to ChatGPT version 4.0 (GPT-4), a large language model developed by OpenAI. GPT-4 made its own treatment recommendation for each patient, choosing the most appropriate strategy among TAVR, SAVR and medical therapy based on 14 key patient variables.
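The article does not reproduce the study's prompt or enumerate the 14 variables, so the sketch below is purely illustrative: the variable names, prompt wording and model settings are assumptions, not the authors' method. It shows one plausible way to query GPT-4 for a single-label recommendation using the OpenAI Python client.

```python
# Illustrative sketch only: the prompt wording and variable names below are
# hypothetical; the study's actual 14 variables and prompt are not reproduced here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical structured patient data of the kind the study describes
patient = {
    "age": 84,
    "sex": "F",
    "aortic_valve_area_cm2": 0.7,
    "mean_gradient_mmHg": 48,
    "lvef_percent": 55,
    "sts_prom_percent": 6.2,
    "clinical_frailty_scale": 5,
    "prior_cardiac_surgery": False,
}

prompt = (
    "You are assisting a heart team evaluating a patient with severe aortic "
    "stenosis. Based on the variables below, recommend exactly one management "
    "strategy: TAVR, SAVR, or medical therapy. Answer with the strategy only.\n\n"
    + "\n".join(f"{key}: {value}" for key, value in patient.items())
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output keeps repeated classifications comparable
)
print(response.choices[0].message.content)  # e.g. "TAVR"
```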
Overall, ChatGPT agreed with the HT 77% of the time. It was most accurate for patients ultimately recommended for TAVR, matching the HT's decision in 90% of cases. It was less accurate for SAVR and medical therapy patients, agreeing with the HT 65% of the time for both groups.
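For readers curious how such agreement figures are derived, here is a minimal sketch that computes overall and per-class agreement from paired decision labels; the short lists are invented placeholders, not the study's data.

```python
# Minimal sketch: overall and per-class agreement between heart team (HT)
# decisions and model suggestions. The lists below are made-up stand-ins.
from collections import defaultdict

ht_decisions = ["TAVR", "SAVR", "medical", "TAVR", "SAVR"]   # heart team
gpt_decisions = ["TAVR", "TAVR", "medical", "TAVR", "SAVR"]  # ChatGPT

overall = sum(h == g for h, g in zip(ht_decisions, gpt_decisions)) / len(ht_decisions)

# Per-class agreement: of the patients the HT assigned to each strategy,
# what fraction did the model match?
per_class = defaultdict(lambda: [0, 0])  # class -> [matches, total]
for h, g in zip(ht_decisions, gpt_decisions):
    per_class[h][1] += 1
    per_class[h][0] += h == g

print(f"overall agreement: {overall:.0%}")
for cls, (matches, total) in per_class.items():
    print(f"{cls}: {matches}/{total} = {matches/total:.0%}")
```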
“This study suggests that AI, specifically ChatGPT-4, could potentially play a role in the decision-making process within an HT,” the authors wrote. “This lays the ground for future larger studies with a multi-center and prospective design. These studies should aim to comprehensively examine the factors contributing to the occasional divergences between ChatGPT’s evaluations and the HT’s decisions. Additionally, they should assess the patient outcomes associated with instances where such discrepancies are present. Despite the good performance observed, it is crucial to remember that AI tools are not intended to replace clinicians, but rather to support them in their decision-making process.”
One positive takeaway from the large language model's performance was that it did not recommend surgery for any patients assigned to medical therapy, "which confirms its ability to identify patients at high or prohibitive surgical risk." Not everything about its assessments, however, was positive.
“The algorithm still suggested TAVR for seven patients assigned to medical treatment, illustrating the difficulty and uncertainty surrounding such a decision,” the authors wrote. “Usually, the decision to choose medical management is mainly related to excessive frailty, the number of comorbidities, or expected limited life expectancy, suggesting the futility of an invasive procedure. However, this final decision is complex and potentially not fully represented by the 14 variables used in the present study.”
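Discrepancies like these typically surface in a cross-tabulation of HT decisions against model suggestions. A small helper for building such a table might look like the following; the labels and example counts are illustrative, not reconstructed from the paper.

```python
# Sketch of a cross-tabulation (confusion matrix) of HT decisions vs. model
# suggestions, the kind of table that would surface cases such as the seven
# medical-therapy patients for whom the model suggested TAVR. Example data
# below is invented, not the study's.
from collections import Counter

def confusion(ht, model, classes=("TAVR", "SAVR", "medical")):
    counts = Counter(zip(ht, model))  # (HT decision, model suggestion) -> count
    header = "HT \\ model".ljust(12) + "".join(c.rjust(10) for c in classes)
    rows = [header]
    for h in classes:
        rows.append(h.ljust(12) + "".join(str(counts[(h, m)]).rjust(10) for m in classes))
    return "\n".join(rows)

print(confusion(["medical", "medical", "TAVR"], ["TAVR", "medical", "TAVR"]))
```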
Reviewing their findings, Salihu et al. emphasized that these advanced AI models could potentially serve as a "failsafe" when managing care for patients with severe AS. The HT would still make the final decisions, but ChatGPT or a similar large language model could add value by running in the background and catching potential errors in the HT's judgment.
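As a conceptual sketch of that "failsafe" idea, a background check could simply flag every case where the model's suggestion diverges from the HT's decision and queue it for a second look. Nothing like this was deployed in the study; the function and identifiers here are hypothetical.

```python
# Conceptual sketch of a background "failsafe": flag cases where the model's
# suggestion diverges from the HT decision so the team can take a second look.
def flag_discrepancies(cases):
    """cases: iterable of (patient_id, ht_decision, model_suggestion)."""
    return [(pid, ht, model) for pid, ht, model in cases if ht != model]

example_cases = [
    ("P001", "SAVR", "SAVR"),      # agreement: nothing to flag
    ("P002", "medical", "TAVR"),   # discrepancy: flagged for review
]
for pid, ht, model in flag_discrepancies(example_cases):
    print(f"review {pid}: HT chose {ht}, model suggested {model}")
```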
“AI technologies have the potential to revolutionize healthcare, making it more efficient, personalized, and patient-centered,” the group concluded. “Nevertheless, in order to fully achieve this potential, it is essential to tackle the challenges related to the interpretability, legal implications and ethical considerations of AI use in healthcare. As AI continues to evolve, we can anticipate an increasingly prominent role of these tools, with the ultimate goal of pushing the boundaries of what can be achieved in patient care.”