Wireless Worries: Hack-proofing Implantable Devices
The wireless technology that allows implantable medical devices to assist people with heart disease also is a potential source of some of their greatest security risks. Innovators seek new schemes to prevent malicious hacking, and while the FDA tries to keep up, some question whether the agency’s surveillance efforts are enough.
Concerns swirl about computer-related security breaches caused by viruses, hackers and lost or stolen protected health information. But security vulnerabilities also extend to computers embedded in devices such as implantable cardioverter-defibrillators (ICDs). ICDs benefit many patients, but they are potentially vulnerable to security breaches that could compromise their performance as well as patient safety and privacy.
This security weakness prompted former Vice President Dick Cheney to modify his ICD several years ago for fear of an attack. He revealed last fall that he had instructed his physician to disable the device’s wireless function to prevent hackers from interfering with it and triggering a fatal shock.
Pinpointing vulnerabilities
Cheney’s concerns may not be unfounded. In 2012, the U.S. Government Accountability Office warned of potential vulnerabilities in ICDs and other implantable devices and recommended that the FDA expand its focus to include risks from intentional threats. In response, the FDA issued final guidance to manufacturers highlighting the radio frequency wireless technology considerations that affect the safe and effective use of such devices.
The guidance specifically examined considerations for the design, testing and use of ICDs and other devices, and described the design process documentation and objective evidence the agency expects in premarket submissions. Its central message was the need for strong risk management strategies, including verification and validation testing.
While manufacturers are asked to beef up their security efforts, researchers continue to uncover new risks.
Case in point
Recent research has shown that the analog sensors within ICDs and pacemakers, in particular, are vulnerable to tampering. For instance, an international team demonstrated that, at close range, they could use radio frequency electromagnetic waves to make a device register an erratic heartbeat. Such a forged signal could trigger unnecessary defibrillation shocks.
The team conducted experiments under three conditions: proof-of-concept tests in free air, tests in a calibrated saline solution and tests on devices implanted in an artificial cadaver. In preparation for the last test, physicians carried out the implantation procedure to ensure that the leads were placed properly. The tests were then conducted with saline solution flowing through the artificial cadaver’s circulatory system, according to Denis Foo Kune, PhD, a postdoctoral researcher in the University of Michigan Computer Science and Engineering Division’s Computer and Privacy Research Group in Ann Arbor, who participated on the team.
“We were surprised that the assumption of trustworthiness of the analog sensing data has gone unchallenged for a long time. We also were surprised how well our amplitude modulation technique worked on common electronics. The interference rejection was much better on the cardiac devices, forcing us to get very close to the device,” says Foo Kune.
While Foo Kune says there is no known case of a hacker altering an implantable cardiac device this way, such an attack could become possible as technology advances. “At this point, it would take a large amount of power to carry out the attack from an appreciable distance,” says Foo Kune. “However, as technology advances it is possible that this attack would become more feasible. That's why we think that we have to revisit the assumption that we can blindly trust the sensing input.”
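The general principle behind the attack, sometimes called unintentional demodulation, can be illustrated in simulation. The Python sketch below is a simplified model rather than the team’s actual experiment: the carrier frequency, the forged waveform and the rectify-and-filter front end are all assumptions chosen for clarity.

```python
import numpy as np

# Simplified model: an attacker amplitude-modulates a carrier with a forged
# "cardiac" pulse train. Nonlinearities in an analog front end (modeled here
# as rectification plus low-pass filtering) demodulate the carrier, so the
# sensing circuitry sees a baseband waveform resembling a heartbeat.

fs = 100_000                      # simulation sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)     # two seconds of signal

# Forged pulse train at 1.2 Hz (~72 "beats" per minute)
forged = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)

carrier_hz = 10_000               # stand-in carrier; real attacks used RF
interference = (1 + forged) * np.sin(2 * np.pi * carrier_hz * t)

# Unintentional demodulation: rectify, then average over ~2 ms,
# a crude stand-in for the sensor's analog front end
window = int(fs * 0.002)
envelope = np.convolve(np.abs(interference), np.ones(window) / window,
                       mode="same")

# The recovered envelope tracks the forged pulse train almost perfectly
corr = np.corrcoef(envelope, forged)[0, 1]
print(f"correlation between recovered envelope and forged signal: {corr:.3f}")
```

Because the averaging filter strips the carrier but keeps its envelope, the sensing circuitry in this toy model effectively receives whatever waveform the attacker modulated onto the carrier.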
Personalized authentication
As these vulnerabilities come to light, researchers are exploring ways of hack-proofing cardiac devices. Foo Kune and others are examining the trustworthiness of using physiological signals such as heartbeats to exchange cryptographic keys between devices on a patient's body.
One project that uses heartbeat data shows promise. Collaborating with RSA Laboratories in Cambridge, Mass., researchers at Houston-based Rice University developed the first electrocardiogram (ECG)-based authentication scheme, called Heart-to-Heart (H2H). It distinguishes a patient’s live ECG signals from adversarial ones under a rigorous statistical model with minimal false positives.
“My group was very much interested in the security problem that has to do with small hardware that is really power constrained,” says Farinaz Koushanfar, PhD, director of the adaptive computing and embedded systems lab at Rice, noting that resource constraints in implantable devices forbid the use of heavy cryptographic or signal-processing modules with high energy consumption.
Other efforts in the past attempted to use an ECG-based authentication scheme for devices, but those methods were shown to be breakable, says Koushanfar. “Our work was trying to design something very secure, with no weaknesses, that could use a protocol that is implementable.”
Her team specifically looked at emergency scenarios in which medical personnel must access a device but lack pre-existing keys. H2H allows access to a patient’s medical device using only a 16-bit microprocessor, but solely when the programmer has direct contact with the patient’s body, a requirement the team calls “touch-to-access.”
Most security methods require changes to implantable device hardware; H2H, by contrast, can be deployed as a software update, says graduate student Masoud Rostami, making it a less expensive way to bolster security. “It can have a minimal memory and power footprint,” Rostami says.
H2H satisfies the lightweight implementation requirements and noise margins needed for reliable authentication, and its error rates are low: a legitimate programmer with skin contact would be rejected on average once in every 10,000 attempts, and would fail twice consecutively at most once in every 1 million attempts. The authentication process takes about 15 to 20 seconds, says Koushanfar.
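The published H2H protocol wraps its ECG comparison in standard cryptographic machinery; the sketch below, using hypothetical beat timestamps and a toy bit-extraction rule, illustrates only the core intuition: two sensors on the same body observe nearly identical inter-pulse intervals (IPIs), whose least significant bits are difficult to predict remotely.

```python
import random

def ipi_bits(beat_times_ms, bits_per_interval=4):
    """Extract the least significant bits of each inter-pulse interval,
    the portion of heart timing that is hardest to predict remotely."""
    bits = []
    for a, b in zip(beat_times_ms, beat_times_ms[1:]):
        ipi = int(round(b - a))
        bits.extend((ipi >> k) & 1 for k in range(bits_per_interval))
    return bits

def agreement(bits_a, bits_b):
    """Fraction of positions on which two bit strings agree."""
    return sum(x == y for x, y in zip(bits_a, bits_b)) / len(bits_a)

# Hypothetical beat timestamps in milliseconds. The implant and a
# skin-contact programmer observe the same heart, so their readings
# differ only by small measurement jitter; a remote adversary must guess.
implant = [0, 812, 1639, 2441, 3287, 4090, 4918, 5701, 6532, 7349]
programmer = [t + random.choice([-1, 0, 1]) for t in implant]
adversary = [820 * i for i in range(10)]   # guesses a steady 820 ms rhythm

print("programmer agreement:", agreement(ipi_bits(implant), ipi_bits(programmer)))
print("adversary agreement: ", agreement(ipi_bits(implant), ipi_bits(adversary)))
# A deployment would accept only when agreement clears a statistical
# threshold well above the ~0.5 expected from remote guessing, bounding
# false accepts much as the published H2H analysis does.
```

In this toy model the skin-contact programmer agrees with the implant on most bits despite jitter, while the remote guesser hovers near the 50 percent expected by chance, which is why a statistical threshold can separate the two.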
Industry also is taking steps to balance security needs with therapeutic software and wireless communication technologies. “In recent years, we have seen lots of good efforts from the manufacturers to apply secure coding practices, giving me hope that we will soon have hardened platforms,” Foo Kune says.
FDA as watchdog
The FDA has stepped up efforts to prevent threats such as hacking and other security risks, but one analysis suggests the agency’s postmarket surveillance systems may not be up to the task. An examination of FDA records found that the agency’s classification of postmarket events may be inadequate for capturing security- and privacy-related incidents (PLoS One, online July 19, 2012).
A research team led by Daniel Kramer, MD, of Harvard Medical School in Boston, scoured weekly FDA enforcement reports, device recall reports and adverse event reports. Of the 1,845 recalls identified, 1.7 percent involved devices capable of wireless connectivity. Despite the small percentage, these devices accounted for a disproportionate number and severity of recalls. Of all the recalls, only one was directly attributed to a security failure.
While the results may appear reassuring on the surface, they may simply reflect shortcomings in the FDA’s postmarket surveillance systems, the authors cautioned. They questioned the databases’ ability to detect security signals and called for more specific categories for security-related events and clearer instructions on correcting security problems.
As more risks are uncovered and innovators scramble to close those security gaps, it remains to be seen whether the FDA can fine-tune its surveillance process to stay ahead of the hackers.