Dictation, done better

Speech recognition errors spur an accuracy makeover.


Rather than using both hands to type his addendum notes, hospitalist Niraj Patel, MD, uses his voice. Dictating, he said, cuts the time it takes to complete a note from about 10 minutes to somewhere between 2 and 5 minutes.

Freeing up his hands allows for multitasking. “I can physically scroll through the labs and scroll through the resident's note in real time, so for myself, it's just way faster,” said Dr. Patel, an attending at Lankenau Medical Center in Wynnewood, Pa. “I also feel like it's a little bit more fluid in explaining my thought process.”

Physicians have been documenting with medical dictation software (e.g., Nuance Communications' Dragon products and M*Modal Fluency for Transcription) since EHRs began to trickle into the mainstream. However, for all its longevity and reported advantages, the technology still has its quirks.

In 3 years of using the software as a hospitalist, Dr. Patel said, he has seen speech recognition errors all the time. “It's getting better. However, it still has multiple errors—some that I catch, some, unfortunately, that I do not catch,” he said. “And then my residents will come up to me and say, ‘Did you really mean that?’”

Experts in the field pinpoint these mistakes, which range from goofily benign to life-threatening, as medical dictation's biggest drawback. But as researchers continue to investigate the potential benefits and harms of the technology, new efforts aim to make it smarter.

Learning curve

Whereas hospitals have traditionally used medical transcriptionists to correct speech recognition errors on the back end, many doctors now use front-end speech recognition, entering their dictated notes into the EHR themselves after proofreading them. Before they're ready to dictate, though, they must train the software to recognize their voices.

For Dr. Patel, the first training session involved reading a cardiology note for 15 minutes. “It tells you to go faster or slower, speak louder, speak clearer, and then even after that, you can easily go back and teach it new words,” he said.

For example, Dr. Patel needed to retrain it to recognize the word “antibiotic” (which, for some reason, it had persistently picked up as “any biotic”), and he also trained it to identify the novel anticoagulants. “Let alone me remembering how to spell rivaroxaban, apixaban, or dabigatran—just me saying it and typing it out once is all that I need,” he said.

However, research suggests that mistakes are common. Informatics researcher Foster R. Goss, DO, MMSc, an emergency physician at the University of Colorado Hospital, noted multiple cases involving speech recognition errors in the ED. “One of them was actually presented at one of our morbidity and mortality conferences at the hospital because the error was significant enough that it caused a patient to get an antibiotic that would have been contraindicated in pregnancy,” he said.

To quantify the speech recognition errors occurring in the ED, Dr. Goss assessed a random sample of 100 notes dictated by attending emergency physicians who were longtime users of the software. He found 128 errors in total (1.3 errors per note), 19 of which (about 15%) had the potential to impact patient care, according to results published in the September 2016 International Journal of Medical Informatics. All told, 71% of notes contained errors, most of which were enunciation-based.

Potential benefits

Researchers are also assessing the reported advantages of dictated over self-typed documentation. A randomized controlled trial, published in March 2015 by the Journal of Medical Internet Research, compared the methods using 1,455 clinical reports generated by 28 physicians from the departments of pediatrics and trauma/surgery at a university hospital in Germany.

Compared to those who typed their notes, those who used speech recognition had a 26% overall increase in documentation speed (217 vs. 173 characters per minute) and logged longer notes (649 vs. 356 characters per report). Those figures included the time it took to make corrections post-dictation; the results also showed a slight mood boost among the intervention physicians.

Other studies haven't been so favorable, such as one published in the July 2014 Western Journal of Emergency Medicine. Researchers compared emergency physician time use and interruptions, defined as a change in task with the previous task left incomplete or truncated, at 2 teaching hospital EDs (one used typed data entry, the other voice recognition). After accounting for the time it took to make corrections, researchers found no statistically significant differences in the time spent charting (29.4% typed vs. 27.5% voice) or the time allocated to direct patient care (30.7% vs. 30.8%).

There were, however, significant differences between the 2 EDs in interruptions per hour (5.33 typed vs. 3.47 voice). Lead author Jonathan dela Cruz, MD, said he got the idea to study interruptions after noticing that physicians who were using the microphone weren't interrupted as often. “Most of the literature [suggests that] any time someone is interrupted in their thought process, there's a higher risk for a task to be incomplete,” he said. “So the more times you get interrupted, the more at risk you are for creating a medical error or creating a delay in care.”

However, anecdotal observations suggest that these findings may no longer be accurate, said Dr. dela Cruz, an associate professor and director of research and ultrasound at Southern Illinois University School of Medicine in Springfield. “Our residents use dictation a lot, but they still get interrupted by nursing staff, which I didn't see when I was originally doing this paper and when we originally implemented Dragon,” he said. “I think there's this cultural shift where they see that it's OK to interrupt certain tasks.”

Challenges

Logistical challenges can also keep physicians from taking full advantage of the software. At Dr. Patel's hospital, 2 hospitalists' offices offer the ability to dictate through personal computers, and about 1 to 2 computers per floor also have the software. “I don't like using those because they're in a loud environment...so I sit down in our office and use it there,” he said.

If it weren't for the problem of finding a dictation-capable computer, Dr. Patel said he would dictate his clinical notes in addition to addendums. “I try to do notes in real time when I am the only one seeing the patient,” he said. “If it was easier, I would [dictate them] indeed. However, practically, it is not possible.”

Dr. Patel also noted that prior versions of the technology, albeit less accurate, were much faster at actually putting the words on the screen. “Now I have to dictate an entire sentence, wait, and then it will pop up,” he said. “So it's a big delay, whereas before, I would speak a word and literally in milliseconds the word would already be on the screen.”

There are also issues with EHR integration, so Dr. Patel dictates into the software's text box before copying the note and pasting it into the record. “It is slightly time consuming, but it is faster than me having to tab through the entire note because the way that it dictates into our EHR is very, very slow,” he said. “But I think that's just because it has to cross platforms.”

At Dr. dela Cruz's institution, many residents and attendings have adopted this technology, but interns are no longer allowed to use it. “We saw that they were much slower earlier on in their second year than if they had actually typed,” he said. “If we start our residents dictating initially, sometimes I feel like their critical thinking isn't as well-practiced. I think if you're dictating into a phone, you kind of pontificate into the air, while when you're typing you're actively thinking a little bit more.”

Getting smarter

Dr. Goss said his errors study was pilot work for what is now a large R01 grant project funded by the Agency for Healthcare Research and Quality. Using data from the study, he and colleagues from Brigham and Women's Hospital in Boston are developing artificial intelligence (AI) tools to screen notes for errors and improve the quality and accuracy of dictated documents.

“I dictate my notes in the ED, and there's still a considerable amount of errors that require me to go back through my notes and proofread them very carefully,” said Dr. Goss, who is also a clinical informaticist. “That editing is something that we think could be automated using AI to basically read your note and identify anything that could potentially have been the result of a speech recognition error.”

Over the next 3 years, the researchers will develop their knowledge base of error types and create natural language processing and machine learning tools that can identify them before they're entered into the EHR, Dr. Goss said. “It's going to be really amazing once we're able to send the notes to our system, process those notes in real time, and hopefully catch those errors before they get permanently entered into the patients' medical records,” he said.

However, the tricky part is getting the tools to understand the context of the information that's being used, said Dr. Goss. “For example, the antibiotic gentamicin would not be used routinely in the context of a skin infection, but we found instances where sound-alike medications are entered in error. In this case, the medication actually prescribed was clindamycin,” he said. “It's a little bit more challenging, but that's where some of the machine learning and natural language processing can help us in developing these automated tools to help identify the errors that go beyond just pattern recognition, taking context into consideration.”
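To make that logic concrete, here is a minimal Python sketch of a context check along these lines. Everything in it is an illustrative assumption rather than the Brigham team's actual system: the tiny drug-indication table, the character-similarity stand-in for phonetic matching, and the 0.5 threshold exist only so the gentamicin-for-clindamycin example above gets flagged.

from difflib import SequenceMatcher

# Toy knowledge base mapping indications to plausible drugs (illustrative only).
PLAUSIBLE_DRUGS = {
    "skin infection": {"clindamycin", "cephalexin", "doxycycline"},
    "pneumonia": {"azithromycin", "ceftriaxone", "levofloxacin"},
}

def sound_alike(a, b, threshold=0.5):
    # Crude stand-in for phonetic matching: overall character similarity.
    # A real system would use phonetic encodings or a learned model of
    # the recognizer's acoustic confusions.
    return SequenceMatcher(None, a, b).ratio() >= threshold

def flag_possible_misrecognition(indication, dictated_drug):
    # Step 1 (context): if the drug fits the documented indication, accept it.
    expected = PLAUSIBLE_DRUGS.get(indication, set())
    if dictated_drug in expected:
        return []
    # Step 2 (acoustics): otherwise, suggest context-appropriate drugs the
    # recognizer may have confused it with.
    return sorted(d for d in expected if sound_alike(dictated_drug, d))

suggestions = flag_possible_misrecognition("skin infection", "gentamicin")
if suggestions:
    print("Possible recognition error; did you mean:", ", ".join(suggestions))

The sketch captures why pattern recognition alone falls short: "gentamicin" is a perfectly valid word on its own, so the check has to combine what is plausible for the documented diagnosis with what the recognizer could plausibly have misheard.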

Although speech recognition research has, for the most part, occurred outside hospital medicine, this work aptly applies to hospitalists, Dr. Goss added. “There's been a push at some hospitals to actually transition providers to front-end speech recognition, so I think we will only see more and more use of front-end speech recognition,” he said, noting that his team will likely begin publishing its findings this coming spring.

Dr. Patel, the Pennsylvania hospitalist, said he would warmly welcome such a predictive tool. “That would be incredible if that could come down the line, and I think that would get more people on board with it,” he said. “That's one of the frustrations that we have in our group: When people try it out, they say, ‘I spend more time fixing it than I do actually dictating.’”