Ease of use and safety may go hand in hand, at least where electronic health records (EHRs) are concerned, according to a recent study.
Using 2017-2018 data from 112 U.S. hospitals, researchers found that the computer systems' scores for medication safety were significantly associated with clinician ratings of how well their EHRs support delivery of safe, high-quality care.
“Safety performance was associated with usability, which implies that efforts to improve usability could improve the safety of electronic health records, which makes a lot of sense,” said coauthor David W. Bates, MD, MSc, MACP. “I think most clinicians sense that just based on delivering care.” The results were published Sept. 11 by JAMA Network Open.
Dr. Bates, who is chief of general internal medicine at Brigham and Women's Hospital and a professor in the department of health policy and management at the Harvard T.H. Chan School of Public Health in Boston, recently spoke to ACP Hospitalist about the findings and what they may mean for the future of EHRs.
Q: Did you expect your study would show a link between usability and safety?
A: We hoped we'd find this, but it's always hard to know, when you're using really large-scale data like this, whether the relationship will show up. We've done a number of small-scale experiments showing, for example, that if you improve usability, performance improves at a single site, but being able to demonstrate this across the country was important and encouraging.
There's a lot of additional room for improvement in this area. The current level of performance really is just not that great, as hospitalists certainly appreciate. I'll just give you one example: When people write medication orders, they often get multiple warnings, and often the warnings about the really dangerous things look the same as the ones about things that are not so dangerous. That's just not good from the safety perspective. It'd be better, one, not to get so many unnecessary warnings, and two, to have the ones that are really important look considerably different so that you know to pay attention to them.
Q: Is alert fatigue the main way that EHR usability can affect safety? What are some others?
A: Alert fatigue is certainly one area. ... There are many more subtle things. For example, there are many times when there's some key piece of information that you need to know about, like a patient's blood pressure, that's put in a place on the computer screen where it's hard to find or people often miss it.
The same goes for tests. It's really important to highlight the truly important test results, and some results are sufficiently important that people should very definitely be paying attention to them, but most institutions don't have mechanisms for calling those out, except in limited circumstances. For example, you get a page from the lab if a glucose of 700 mg/dL comes back, but there might be a really abnormal X-ray, and in many institutions, that does not necessarily get communicated to you.
Q: Could efforts to improve the usability of EHRs also affect hospitals' safety statistics?
A: We haven't shown that yet empirically, but our results suggest that it would follow. There are parts of EHRs that are baked in by the vendor, and they're relatively hard for the hospital to change. But then there are other parts that are under the control of individual hospitals, and at a minimum hospitals should do a good job with those and should also be working with vendors to help identify other issues that need to be improved.
Q: Who should be involved in fixing EHRs' usability problems—hospitals, software companies?
A: I would say hospitals have been reasonably engaged, and I think the issue is more convincing the vendors to make changes, but both must work together. There is no question that hospitals have more control than they think they do, but there are many issues that really do need to be addressed by the vendors.
Q: What are some of those issues?
A: The medication-related decision support should be easier to do better. Again, far too many unimportant warnings are being displayed on a regular basis, and there's not good differentiation between the life-threatening warnings and the ones that are not. Another big area is the way that decision support is delivered. Vendors specify a very limited set of ways in which you can deliver decision support, and some of those are not very effective from the human factors perspective, so that makes it harder to get providers to do the right things.
Specific areas where that's been a problem include how labs are displayed, including important abnormals, and how some suggestions and warnings are delivered. For example, you might see that a specific test is suggested, but the computer system doesn't make it easy to then order that test, which it should. If you know there's a consequent action that you really want somebody to take, it's very helpful if the system offers that action directly. Also, some vendor systems don't allow you to condition decision support based on the patient's characteristics. That turns out to be a huge issue, because if the patient has renal insufficiency or is older, there are often things that should be set differently for that patient.
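The kind of patient-conditioned, tiered decision support described here can be sketched in a few lines. This is an illustrative example only: the drug, the eGFR and age thresholds, and the severity tiers are hypothetical, not taken from any vendor system or clinical guideline.

```python
def nsaid_alert_level(age: int, egfr: float) -> str:
    """Return an alert tier for a hypothetical NSAID order,
    conditioned on patient characteristics (age, renal function).

    Tiers (illustrative): 'interruptive' forces acknowledgment,
    'prominent' is visually distinct but non-blocking, 'none'
    suppresses the alert to reduce alert fatigue.
    """
    if egfr < 30:                     # significant renal impairment
        return "interruptive"
    if egfr < 60 or age >= 75:        # moderate impairment or advanced age
        return "prominent"
    return "none"                     # low-risk patient: don't interrupt
```

The point of the sketch is that the same warning renders differently depending on risk, so the truly dangerous cases look different from the routine ones.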
Q: Are there many factors that should be customizable but aren't?
A: I would say that's accurate. I understand the vendors' perspective, too, in that they don't want too much customization to happen. Organizations can do a lot of customization on their own, but the issue is that you then have an upgrade every six months, and that wipes out all your previous customization. That's why customizing and recustomizing isn't the answer.
Q: The data in your study were from 2017 and 2018. Would the results change if more recent data were used, or do your findings reflect current conditions?
A: The answer is we don't really know. It'd be useful to have more recent information, certainly. Widespread adoption of electronic health records began just before the time period of our study, and now nearly all the electronic health records that are in place are vendor systems. But there have not been a lot of studies over the last few years, and more evaluation would really be helpful. More work in this area is really badly needed, and I don't think we do have a good sense of current EHR performance. … There are still whole areas where hospitals don't have important decision support in place, so, for example, they're not adjusting doses for patients with kidney dysfunction automatically, they're not adjusting the doses of drugs for patients who are older, and those are things that they could and should be doing. The reason that they haven't gotten to it is that the vendors just don't make it easy to do that, currently. The key vendors say that they do want to enable those things, but it hasn't happened yet.
Q: Do physicians then take on that role themselves?
A: Yes, and that's another area of missed opportunity. We did a study in which we showed that if you have the computer calculate [dose adjustments] for you, and then you just suggest that as the default dose, that reduces the length of stay for patients by a little over half a day. That may not sound like a lot, but it turns out to be a huge amount.
There are roughly 200 medications for which we should be routinely adjusting the dosage for patients with renal insufficiency, and there's no hospitalist who can tell me what all those 200 medications are. I certainly can't remember them. But the machine can and should be doing it. People just can't remember that many things, and even if you do remember that the dose should be adjusted, you have to do an extra calculation to determine what that adjusted dose should be, and that's just one more step to manage. We and other people have shown that when there is a calculation that needs to be done, it's always better to have the computer do it because it will make fewer mistakes.
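The calculation being described is exactly the kind of thing a computer does reliably. As a minimal sketch: the Cockcroft-Gault formula is a standard estimate of creatinine clearance, but the dose thresholds below are hypothetical, drug-specific values invented for illustration, not a real dosing table.

```python
def cockcroft_gault(age: int, weight_kg: float,
                    scr_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def suggested_dose(usual_dose_mg: float, crcl: float) -> float:
    """Hypothetical renal adjustment for one illustrative drug;
    real adjustment tables are drug-specific."""
    if crcl >= 50:
        return usual_dose_mg        # normal dosing
    if crcl >= 10:
        return usual_dose_mg / 2    # moderate impairment: halve the dose
    return usual_dose_mg / 4        # severe impairment
```

For a 70-year-old, 72-kg man with a serum creatinine of 2.0 mg/dL, the estimate is 35 mL/min, which under these illustrative thresholds would cut the default dose in half — the arithmetic a clinician would otherwise do by hand for each of those roughly 200 medications.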
Q: What do you predict will change about EHRs over the next 10 years?
A: It's taken longer for EHRs to improve than I hoped or expected. One big trend is that they will start to leverage artificial intelligence to make suggestions to providers that will make care more efficient and safer and higher quality. There's a lot of work to be done on the usability front, as this study shows, and most of the issues that we found with usability are still present. Usability has maybe improved a little bit since then, but not dramatically. I think that this is an urgent situation. Many people are being harmed, and in many cases the harms are the result of our failure as a system to address these issues. There are near-term opportunities to do better.