Smart Support: Artificial Intelligence Will Help, Not Replace, Electrophysiologists

Artificial intelligence (AI)–assisted electrophysiology (EP) shows promise, but even its most ardent advocates aren’t ready to give it a full-fledged endorsement—yet.

AI revolution? 

A robot revolution is coming, predicted the Huffington Post in November. Citing Bureau of Labor Statistics data and an analysis by Ball State University’s Center for Business and Economic Research (CBER, online June 19, 2017), the article warned that nearly half of American jobs are “vulnerable to automation.” Such forecasts aren’t new. Many have been couched with advice to prepare for a changing job market, while others have fed the debate about what machines should and shouldn’t be tasked with doing.

Medicine hasn’t been immune to the debate, leading some clinicians to worry that AI—the same brain behind automation, robotics and deep learning—might someday replace them, just as it’s predicted to displace workers in many manufacturing jobs. The concern has been fueled by studies demonstrating what AI can do. At the 2016 Radiological Society of North America annual meeting, for example, physicians described using IBM Watson to diagnose aortic stenosis and perform “cognitive peer review” to find practice variations.

Last July, Stanford University computer programmers pitted an algorithm for identifying cardiac arrhythmias against the interpreting skills of six board-certified cardiologists. Thirty thousand clips from various patients later, the technology had outperformed the physicians in both recall and precision (arXiv:1707.01836v1, online July 6, 2017).
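For readers less familiar with those two metrics: precision is the share of a model’s arrhythmia calls that turn out to be correct, and recall is the share of true arrhythmias the model actually catches. The short Python sketch below, using entirely hypothetical labels rather than data from the Stanford study, illustrates how the two numbers are computed for a single rhythm class.

    def precision_recall(true_labels, predicted_labels, target="AF"):
        """Return (precision, recall) for one rhythm class, e.g. atrial fibrillation."""
        pairs = list(zip(true_labels, predicted_labels))
        tp = sum(1 for t, p in pairs if p == target and t == target)  # correctly flagged
        fp = sum(1 for t, p in pairs if p == target and t != target)  # flagged, but wrong
        fn = sum(1 for t, p in pairs if p != target and t == target)  # missed
        precision = tp / (tp + fp) if (tp + fp) else 0.0  # of all AF calls, how many were right
        recall = tp / (tp + fn) if (tp + fn) else 0.0     # of all true AF, how many were caught
        return precision, recall

    # Hypothetical reference labels vs. model output for five ECG clips (illustrative only)
    truth = ["AF", "SINUS", "AF", "SVT", "SINUS"]
    model = ["AF", "AF", "AF", "SVT", "SINUS"]
    print(precision_recall(truth, model))  # prints (0.666..., 1.0)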

Only a few months later, Stanford University scientists announced they had developed an AI program that bested radiologists when both the machines and humans used chest X-rays to diagnose pneumonia (arXiv:1711.05225, online Nov. 14, 2017). 

Setting goals 

For all the enthusiasm when computer algorithms succeed, some say medical AI is being oversold. 

With the Man and Machine clinical trial, for example, Andreas Rillig, MD, a cardiologist at the Asklepios Klinik St. Georg in Hamburg, Germany, and colleagues found that robotic navigation for circumferential pulmonary vein isolation was noninferior to the gold standard of manual ablation. Recurrence and procedure-related complication rates were similar for both cohorts, although procedure times were significantly shorter with the manual approach (JACC Clin Electrophysiol 2017;3[8]:875-83).

While the achievement was touted, the discussion also turned to a larger question: are humans setting the right goals for machines? In an editorial accompanying the trial publication, Bruce Lindsay, MD, an electrophysiologist at the Cleveland Clinic, suggested that less-than-superior results fall short of what the goal should be. “Although there are legitimate reasons why trials are designed to demonstrate noninferiority, the real objective of new and expensive technologies is to be superior. Why else would you buy them?” he wrote, adding that there’s little justification for tools that increase costs “without improving outcomes or saving time” (JACC Clin Electrophysiol 2017;3[8]:884-6).

Mastering AI’s potential 

To reap the maximum value from AI, cardiologists should think of it as “assistive intelligence” and plan to use it for tasks that will make their jobs easier, suggests Mintu Turakhia, MD, MAS, executive director of Stanford’s Center for Digital Health and director of cardiac electrophysiology at the VA Palo Alto Healthcare System. The goal, he says, shouldn’t be to replace clinicians but rather to enable them “to do things more quickly, particularly tasks that are repetitive and require a fair amount of ‘in the head’ computation.” 

Similarly, Stanford University researchers Jonathan Chen, MD, PhD, and Steven Asch, MD, MPH, wrote in the New England Journal of Medicine that AI should be positioned as a tool for physicians, not as a replacement. To present the technology as anything else is to risk the “peak of inflated expectations,” they warned (2017;376[26]:2507-9). In a tweet, Asch said AI has “amazing potential, burdened by [its] own hype.”  

Chayakrit Krittanawong, MD, a researcher at the Icahn School of Medicine, Mount Sinai St. Luke’s Hospital, in New York City, has studied AI as a means to improve pacemaker programming. He sees AI as a tool that technology companies will use to “foster the evolution of precision medicine,” a move he thinks ultimately will be good for cardiologists.

While he agrees that AI probably won’t replace physicians, Krittanawong and his co-authors wrote in the Journal of the American College of Cardiology that “it is important that physicians know how to use AI sufficiently to generate their hypotheses, perform big data analytics, and optimize AI applications in clinical practices to bring on the era of precision [cardiovascular] medicine” (2017;69[21]:2657-64).

Krittanawong told CVB that he has reservations about the long-term prospects of AI in some treatment areas. “AI programmed in a pacemaker could cause problems to the patients,” he wrote in an email. A better use of the technology, he suggested, might be AI programmed into a pacemaker to self-interrogate and then send summaries to clinicians, who would decide how to act on the data.

He thinks of AI as “a black box that might seem hard to prove at this moment.” It will be up to healthcare and technology leaders to master the technology, ensuring “multiple validations in different populations, algorithms and types of datasets.” More important, he says, there is a need for guidelines, established “either by professional societies or the government.” 

Still, Krittanawong is optimistic about AI’s potential. “I believe that in the future using AI in novel areas of investigation in EP, such as genomics, metagenomics, gene- and cell-targeted therapies, AI-guided ablation or EP studies, [will be] the keys of success for preemptive treatment and primary or secondary prevention—not only in the EP field but the healthcare industry,” he says. 

Patrice Desvigne-Nickens, MD, medical officer of the heart failure and arrhythmia branch of the National Heart, Lung, and Blood Institute’s cardiovascular sciences division, says it’s too early for a “full embrace” of AI but that AI’s time will come after a few hurdles are overcome. “AI has already demonstrated respectable levels of competence,” she says, “and these systems will only improve over time—and this improvement is likely to be very rapid.”

One challenge may diminish on its own as more physicians get hands-on experience with AI, she suggests. “These systems still are not terribly user friendly for the uninitiated today, but the next generation of cardiologists and EPs are likely to be very familiar with [them],” she predicts. She’d like to see cardiologists working together to “develop shared knowledge about potential AI benefits and needs.”

Desvigne-Nickens suggests an analogy to Netflix, the popular entertainment subscription service. “Most people know that Netflix gets pretty good at recommending programming to repeat viewers,” she explains. “The more you use it, the better it gets. AI will be the same way. … Over time, I think the next generation of EPs will become very comfortable [with AI], and they may not even need special training in AI.”  
