Newman's Notions | December 2019

Scorecard

Sometimes it's hard to really know how you are doing, as a clinician or an organization.


Ed Koch, the late mayor of New York City who was endorsed by both the Republican and Democratic parties in the 1980s, was known for asking, “How'm I doin’?”

Illustration by David Rosenman

Sometimes it's hard to really know how you are doing, as an individual clinician or an organization. We have dashboards that give us up-to-date information and scorecards that summarize our performance over longer periods of time. The two are frequently confused, so think of it this way. You can't safely drive a car without a dashboard. You need to know how fast you are going, how much gas is in the tank, and whether your engine is running hot. A scorecard will tell you that you've driven an average of 45 miles per hour over the last three months, but not that you're currently 10 miles per hour over the limit. When there is a flashing light behind you, the officer won't care that, on average, you haven't been speeding over the prior two quarters.

Most organizations have a plethora of internal scorecards and a cornucopia of dashboards, each presumably leading to immediate action and innovation. However, more often than not they are misleading and misinterpreted. Whether it's length of stay, patient experience, or the dreaded category of “observed to expected,” the quality of the underlying data can be suspect, and if it is, any analysis built on it will be flawed. We hope these reports will in some way lead to action, but there's the old “garbage in, garbage out” conundrum. Unfortunately, many physician administrators will act on data that would never pass the peer-reviewed literature test. The same leader who would advise against publishing research due to shoddy statistics often accepts administrative data at face value.

Internal scorecards are a fact of life in the era of scoring systems for hospitals. It started in 1919, 100 years ago, with Ernest Codman and his “end results” theory. He wanted facilities to track their outcomes on inpatient mortality as well as posthospital survival, with a goal of improving surgical care. Surgical luminaries like Charles Mayo and Harvey Cushing supported this system, via the American College of Surgeons. They called it the Minimum Standards for Hospitals. It still exists today as The Joint Commission, although the original 18-page manual has been expanded to a considerably longer incarnation. There are numerous other rating bodies, such as U.S. News and World Report, Vizient, Leapfrog, and the five-star rating system (see July 2015 Newman's Notions), to name just a few. Each has its own secret sauce of measurement and scoring that affects the reputation of an institution and the workload of administrators.

Some metrics make sense at face value but on further cogitation only lead down the veritable Carrollian rabbit hole. One great example is hospital length of stay (LOS). This should be straightforward: it's the length of time the patient is in the hospital. But is it? Suppose you check into a hotel and park your car in a nearby garage, arriving Monday and leaving Thursday. You pay for three nights at the hotel (Monday, Tuesday, and Wednesday), yet you are charged for four days of parking (Monday through Thursday). So what is the LOS, three or four?

When you try to calculate an average LOS, is it the stay of patients currently in the hospital or that of patients who have been discharged? If you have a patient in for 198 days, when does that long stay hit the books? And if a patient is admitted at 11 p.m. and discharged two hours later at 1 a.m. the next day, is that two days? Is it even one? It is deceptively simple and decidedly complex.
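To make the ambiguity concrete, here is a minimal Python sketch of that 11 p.m.-to-1 a.m. stay, using made-up timestamps and three common-sense counting conventions; none of them is any rating body's official definition.

    from datetime import datetime

    # Hypothetical stay: admitted at 11 p.m., discharged two hours later at 1 a.m.
    admit = datetime(2019, 12, 2, 23, 0)
    discharge = datetime(2019, 12, 3, 1, 0)

    # Convention 1: midnights crossed (the hotel-nights count).
    midnights = (discharge.date() - admit.date()).days

    # Convention 2: calendar days touched (the parking-garage count).
    calendar_days = midnights + 1

    # Convention 3: elapsed time divided by 24 hours.
    elapsed = (discharge - admit).total_seconds() / 86400

    print(midnights, calendar_days, round(elapsed, 2))  # 1 2 0.08

The same two-hour stay scores as roughly zero days, one day, or two days depending on which convention the report happens to use, which is one reason two scorecards can disagree about the "same" LOS.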

One way in which risk adjusters approach LOS is to examine what is expected and then compare the ratio of observed to expected (the O/E ratio). A patient with one medical problem and no comorbidities might be expected to stay in the hospital a shorter time than a much sicker one. For example, the patient with GI bleeding who lives at home and took too much ibuprofen is vastly different from the cirrhotic patient on dialysis who lives in a nursing home. And they are both different from the patient with angiodysplasia and atrial fibrillation who is taking warfarin and happens to be homeless. The O/E ratio is of vital interest to those who are vitally interested in it. Given these complexities, the expected LOS is largely a function of documentation accuracy: the risk model only knows about the diagnoses and comorbidities that make it into the chart. If your O/E is less than 1.0, you're doing better than expected.
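As a rough sketch of the arithmetic only (the expected values below are invented for illustration; real risk adjusters derive them from documented diagnoses, comorbidities, and their own proprietary models), the ratio is just total observed days over total expected days:

    # Hypothetical patients: (observed LOS in days, risk-adjusted expected LOS in days)
    patients = [
        (2.0, 3.1),  # GI bleed from ibuprofen, otherwise healthy, lives at home
        (9.0, 8.5),  # cirrhosis, on dialysis, lives in a nursing home
        (6.0, 4.2),  # angiodysplasia, atrial fibrillation on warfarin, homeless
    ]

    observed = sum(o for o, _ in patients)
    expected = sum(e for _, e in patients)
    oe_ratio = observed / expected

    print(round(oe_ratio, 2))  # 1.08: stays ran a bit longer than the model predicted

Here the ratio lands just above 1.0; trim two days from the observed total and it dips below.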

So, “How'm I doin’?” Some days I think I'm doing just fine.

When I was younger, I expected that by this time in my life I'd be a marathon-running CEO with a Ferrari and homes in Aspen, Nice, and Tokyo. The observed is quite different. Despite the vagaries of gravity and entropy, I'm still waddling along, built for comfort not speed. I do have a home, but it's in Minnesota. I have a happy career and love my children. And I get to write for ACP Hospitalist. Dang, my O/E ratio is way over 1.0.