JAMA feature: Adding stroke severity improves accuracy of hospital rankings
“Various stakeholders—not only clinicians, but patients and organizations concerned about proper profiling of hospitals and proper care of stroke patients—will be able to have this new information to understand the impact that not including adjustment for initial stroke severity can have on the ranking of hospitals,” said Fonarow, director of the Ahmanson-University of California-Los Angeles Cardiomyopathy Center. “CMS [Centers for Medicare & Medicaid Services] is planning to go forward with a measure. We are hoping that they will find this information valuable and ultimately have a more valid model in place before they go forward.”
CMS and other payers already have initiated claims-based outcomes measures for acute MI, heart failure and community-acquired pneumonia. Stroke outcomes also are on the horizon, as called for under the Patient Protection and Affordable Care Act. The methods currently under consideration use claims-based data to assess and rank hospital performance.
While these claims-based models have been validated against clinical data for MI, heart failure and pneumonia, case-mix adjustment in stroke is difficult, Fonarow and colleagues wrote. They added that current models lack adjustment for stroke severity.
“The concern of many neurologists is that traditional case-mix variables collected in administrative data sets would not adequately capture the key differences among acute ischemic stroke patients,” Fonarow said. “Stroke severity had been shown in a number of small studies to be predictive of outcome.”
Fonarow and colleagues proposed adding the National Institutes of Health Stroke Scale (NIHSS) score to the model to evaluate the change in performance for predicting 30-day all-cause mortality in Medicare beneficiaries with acute ischemic stroke. They collected registry data from hospitals that participate in the Get With The Guidelines-Stroke program, which recommends using the NIHSS near admission to measure stroke severity. Their goal was to determine whether adding the NIHSS data made a meaningful difference in hospital rankings.
They analyzed data from 782 hospitals and 127,950 Medicare patients who experienced ischemic stroke with a documented NIHSS score between April 2003 and December 2009. They then evaluated the performance of claims-based hospital mortality risk models including and excluding NIHSS scores.
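To give a concrete, if simplified, sense of that comparison: the sketch below uses entirely synthetic data (the covariates, coefficients and event rate are invented for illustration, not taken from the study) to fit 30-day mortality models with and without an NIHSS-like severity term and report each model's C-statistic, one of the discrimination measures the investigators evaluated.

```python
# Illustrative sketch only: synthetic data and invented coefficients.
# Compares discrimination (C-statistic / ROC AUC) of a "claims-only"
# mortality model against one that also includes an NIHSS-like severity score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical claims-derived covariates
age = rng.normal(78, 8, n)                       # years
afib = rng.binomial(1, 0.25, n)                  # atrial fibrillation flag
diabetes = rng.binomial(1, 0.30, n)
nihss = np.clip(rng.gamma(2.0, 4.0, n), 0, 42)   # severity score on the 0-42 NIHSS scale

# Simulate 30-day mortality with severity as a strong driver (an assumption)
logit = -3.5 + 0.03 * (age - 78) + 0.3 * afib + 0.2 * diabetes + 0.15 * nihss
p_death = 1 / (1 + np.exp(-logit))
died = rng.binomial(1, p_death)

X_claims = np.column_stack([age, afib, diabetes])
X_full = np.column_stack([age, afib, diabetes, nihss])

Xc_tr, Xc_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_claims, X_full, died, test_size=0.3, random_state=0
)

for name, Xtr, Xte in [("claims only", Xc_tr, Xc_te), ("claims + NIHSS", Xf_tr, Xf_te)]:
    model = LogisticRegression(max_iter=1000).fit(Xtr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(Xte)[:, 1])
    print(f"{name:15s} C-statistic: {auc:.3f}")
```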
To mirror pay-for-performance categorizations, they ranked hospitals into one of three tiers: top 20 percent, middle 60 percent and bottom 20 percent. In pay-for-performance, the top tier receives bonus payments and the bottom may be financially penalized.
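As a rough illustration of how such tiering and reclassification can be tabulated, the sketch below assigns hypothetical hospitals to top 20 percent / middle 60 percent / bottom 20 percent tiers under two sets of risk-standardized mortality estimates and cross-tabulates the tier changes. The data, column names and correlation structure are made up for demonstration; only the tier cut points and hospital count mirror the article.

```python
# Minimal sketch: assign hospitals to pay-for-performance-style tiers
# (top 20% / middle 60% / bottom 20%) under two risk models and
# cross-tabulate how many hospitals change tier. All values are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_hosp = 782  # number of hospitals in the study

hospitals = pd.DataFrame({
    "hospital_id": np.arange(n_hosp),
    # hypothetical 30-day risk-standardized mortality rates (lower is better)
    "rsmr_claims_only": rng.normal(0.14, 0.02, n_hosp),
})
# NIHSS-adjusted estimate: correlated with, but not identical to, the claims-only one
hospitals["rsmr_with_nihss"] = (
    0.7 * hospitals["rsmr_claims_only"] + 0.3 * rng.normal(0.14, 0.02, n_hosp)
)

def to_tier(rsmr: pd.Series) -> pd.Series:
    """Lower mortality -> better tier: top 20%, middle 60%, bottom 20%."""
    return pd.qcut(rsmr, q=[0, 0.2, 0.8, 1.0],
                   labels=["top 20%", "middle 60%", "bottom 20%"])

hospitals["tier_claims_only"] = to_tier(hospitals["rsmr_claims_only"])
hospitals["tier_with_nihss"] = to_tier(hospitals["rsmr_with_nihss"])

# Rows: tier under the claims-only model; columns: tier once NIHSS is added
print(pd.crosstab(hospitals["tier_claims_only"], hospitals["tier_with_nihss"]))

# Share of hospitals that would change tier after severity adjustment
changed = (hospitals["tier_claims_only"] != hospitals["tier_with_nihss"]).mean()
print(f"Reclassified: {changed:.1%}")
```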
They found that adding NIHSS scores substantially improved the models’ discrimination, calibration and explained variance. With NIHSS scores included, only 23 of the 39 hospitals that the claims-only model ranked as top performers retained that status, while 16 other hospitals attained that ranking. Nineteen of the 40 hospitals deemed bottom performers in the claims-only model rose in classification. In another analysis, nine hospitals that the claims-only model placed among “better-than-expected” performers were reclassified as performing “as expected,” and 15 of the 26 hospitals classified as “worse than expected” were reclassified as performing “as expected.”
“It is absolutely critical, if you are going to rank hospitals on their mortality performance, that the models provide adequate discrimination of risk and avoid mischaracterizing hospitals that treat patients with more severe strokes and higher mortality risk,” Fonarow said.
Some hospitals ranked in the bottom 20 percent under the claims-only model moved to the top 20 percent once adequately risk-adjusted, Fonarow said, suggesting that the claims-only model was lacking. “If you are getting a D or an F on one grading system and actually deserved an A or a B for your performance, that is not an acceptable way to rank hospitals,” he said.
In an accompanying editorial, Tobias Kurth, MD, ScD, of the University of Bordeaux in France, and Mitchell S.V. Elkind, MD, of Columbia University College of Physicians and Surgeons in New York City, highlighted the fact that 294 hospitals ranked under the claims-only model were reclassified using the model that Fonarow et al proposed. They emphasized that the influence of stroke severity on outcomes may differ from the influence of severity in other conditions, where demographic and claims data may suffice.
“The assumption that what is true for myocardial infarction is also true for stroke, therefore, is flawed, as the present data underscore,” they wrote.
But Kurth and Elkind cautioned that stroke severity alone may not fully account for mortality from stroke, and that the location of the stroke and the time from onset are also important factors. Among the limitations, they noted that all of the hospitals in the study participated in the Get With The Guidelines-Stroke program; personnel at other hospitals may not be familiar with the NIHSS or may not collect the data accurately, they wrote.
With the proper incentives, hospitals would commit to NIHSS training and data collection, Fonarow said. Without proper risk adjustment, CMS and other payers risk misaligning incentives by punishing hospitals that take in patients with more severe strokes and rewarding hospitals that turn those patients away.
“If you are mischaracterizing hospitals, that can certainly have substantial unintended consequences,” he warned.