Hospital Rankings: Is Top of the List Toppling True Measures of Performance?

Despite limitations, hospital rankings have become an essential tool as clinicians and executives set goals for their institutions and strive to achieve them. And patients take notice, too.

People love to rank things, from music and barbecue joints to colleges and the best places to live. A 2014 article in the Journal of Consumer Research found “best lists” are effective because of the human brain’s preference for organizing information. The so-called top-ten effect occurs because “round numbers are cognitively accessible to consumers due to their prevalent use in everyday communication” (40[6]:1181-1202).

Metrics & methodologies

Leveraging the top-ten phenomenon, an entire industry has emerged around ranking hospitals overall and by service lines. Unlike many best lists, healthcare rankings tend to be built on specific metrics, such as mortality, complications and lengths of stay. 

Ben Harder, managing editor and chief of health analysis at U.S. News & World Report (USNWR), which has been publishing hospital rankings since 1990, says more than five million users consult the USNWR ratings site every month. Clearly, public interest in rankings is part of their value. And while patients still place a lot of stock in what their doctors tell them, anecdotal evidence suggests patients are using hospital rankings as a second opinion.

Clinicians and administrators also monitor hospital and specialty service line rankings, although for different reasons. Harder says referring physicians rely on the USNWR data to help patients and their families select specialists. Some healthcare leaders credit the best lists with motivating them to achieve better outcomes, while others may be looking to capitalize on marketing opportunities. 

It’s worth noting that some of the ranking enterprises didn’t start out with consumers as their intended audience. Jean Chenoweth, senior vice president of 100 Top Programs at IBM Watson Health—which annually publishes 100 Top Hospitals and 50 Top Cardiovascular Hospitals lists—says its initiatives were launched to measure the impact of leadership in achieving clinical process improvement. This was in 1993, during a period when a hospital could proclaim itself to be a center of excellence simply by putting an ad in the newspaper. “We were trying to do something very different because we were an analytics company,” Chenoweth says. “It was not an ad program.”

Regardless of what’s driving interest in rankings, there is reason to proceed carefully. The results of two recent studies raised concerns about hospital rankings. In a 2016 article in the Joint Commission Journal on Quality and Patient Safety, Rush University Medical Center (RUMC) Chief Research Informatics Officer Bala Hota, MD, MPH, and colleagues described differences between the hospital’s internal data and the Medicare data used by USNWR to rank RUMC on patient safety (42[10]:439-46). A broader analysis determined “that Rush was not the only organization inaccurately and unfairly ranked,” according to an RUMC press release. USNWR has since revised its methodology.

A second analysis, published this February in JACC: Cardiovascular Interventions, found that percutaneous coronary interventions (PCIs) performed at hospitals on the USNWR 50 Best Cardiology and Heart Surgery Hospitals list were not associated with superior outcomes compared with PCIs performed at hospitals that didn’t make the list. The conclusion was based on comparisons of mortality and complication rates, including kidney injury and bleeding, between the ranked and nonranked hospitals. The authors also found that the ranked hospitals had a slightly lower proportion of appropriate PCIs (11[4]:342-50).

Another issue is that each of the various ranking enterprises measures different metrics, according to a 2015 Health Affairs study. Lead author J. Matthew Austin, PhD, of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins Medicine, and colleagues reviewed the 2012-2013 rankings of four ranking enterprises—Consumer Reports, HealthGrades, The Leapfrog Group and USNWR. They found that no hospital earned a top-performer spot on all four rankings, and only 10 percent of the 844 hospitals were rated as high performers by more than one of the rankings (34[3]:423-30).

The value terms used in hospital rankings also can be problematic, says Austin, who began consulting with The Leapfrog Group on metrics after his team’s study was published. “Many of these rating systems use terms like ‘top’ and ‘best,’ which is a little unclear,” he says. “One does have to dig into the methodology to really understand what they report to measure.”

Monitoring over marketing

Jeff DiLisi, MD, MBA, senior vice president and chief medical officer at Virginia Hospital Center in Arlington, loves rooting around in the methodologies behind best lists but says it’s important not to lose sight of doing what’s best for patients. “Not that we don’t follow the rankings,” he says. “But when you put the patient at the center of everything you do, the rest follows. You can pull your hair out chasing these various rankings.”

DiLisi says other quality measures also matter, such as being included in the Mayo Clinic Care Network, which requires hospitals to meet Mayo’s standards for patient-focused care. He thinks of rankings as a process improvement tool that can help hospitals focus on delivering safer care. As an example, DiLisi points to Virginia Hospital Center’s cesarean birth rate, which he says dropped from one of the highest in the country to one of the lowest in part because of the sense of urgency and “a big culture change” that arose from a low Leapfrog rating. 

While earning a spot on the rankings charts can be a marketing boon, Scott Levy, MD, chief medical officer and vice president at Doylestown Health in Pennsylvania, also prefers to use hospital rankings as a tool for monitoring how his hospital is doing. He was shocked when, two years ago, Doylestown Health’s overall Leapfrog grade dropped from an A to a C. “When we looked closer, we found that we did not meet Leapfrog’s standard for an intensivists program even though we had a different model that delivered excellent care,” he says. “We reevaluated our ICU coverage, made a few changes and returned to the A grade.”

Both DiLisi and Levy are partial to Leapfrog’s letter grading system, which is based on 27 metrics assessing how well hospitals are protecting patients from preventable errors, injuries and infections. Leapfrog President and CEO Leah Binder says the group’s ratings are intentionally narrow. “Ranking implies it always has to be about who’s the best,” she says. “For some rating systems, it’s about the top 10 or the top 100, and that’s it. I’m not sure the public always needs that. They actually simply need to know about the hospital in their community—if it’s safe at delivering a good quality of care.”

Whatever the terminology, both skeptics and supporters tend to monitor how their hospitals are doing. Rankings provide “the opportunity to compare ourselves with the best in class in the country,” says David Cheney, CEO of Sutter Medical Center in Sacramento. He likes how rankings “inform for the big picture” related to quadruple-aim-type efforts to balance quality, cost and efficiency. “Visually, [rankings are] very informative for individuals, clinical staff and internal sources, who like data and comparative analytics,” he says. 

Sutter’s clinical staff recently used data from IBM Watson Health to get to the bottom of a problem with heart failure readmissions. The data helped them trace the issue to its origin in subacute settings outside the hospital. Better informed, they increased their training in those settings and deployed a number of low-tech strategies to address the medication errors and weight gain that were accounting for most of their readmissions.

Though Sutter announces when it is included in a ranking, it doesn’t build marketing around the information, Cheney says. “I haven’t found it useful for public use. … As the industry improves in communicating the results, I believe rankings will become more important to patients.”
