You might not want a computer to take your job, but what if it handled all of your documentation and coding? Most hospitalists would gladly unload those tasks, and artificial intelligence (AI) experts see that as a real possibility in the not-too-distant future.
“Documentation and coding go hand-in-hand. … and they're both very much within the domain now of a lot of exciting AI,” said Kaushik Venkatesh, MBA, MPH, a medical student at Harvard University in Boston and member of the editorial board of npj Digital Medicine, where he has published on AI and coding.
Of course, as with so many applications of AI, there are also reasons for caution. “It's the latest catchword,” said Cynthia Tang, a clinical documentation and coding expert with Pinson & Tang who is based in Houston. “Some of the things we've encountered with this so-called AI are a real problem. … You have to have a person smart enough to go, ‘This is not valid.’”
Need for change
Everyone at least agrees on why it would be good to automate some of the work of documentation and coding. “Studies show that up to 50% of physician time, depending on the specialties and care setting, is taken up by medical documentation and other administrative tasks,” said Mr. Venkatesh.
That's only the first step, noted his coauthor Joseph Kvedar, MD, a professor of dermatology at Harvard and editor-in-chief of npj Digital Medicine. “[Health care systems] employ literally an army of people to oversee documentation and make sure that we don't commit fraud unwittingly when we're submitting codes,” he said.
Current documentation is bad not just from a time perspective, but also from one of clinical comprehension, observed Ms. Tang. “I just reviewed a record that was 200 pages because every single progress note was copy and pasted and pulled in all the radiology results, all the lab reports from day one until today. It's a mess.”
The prevalence of copy-and-paste notes can make one nostalgic for the old way of documenting medical visits. “It would be great if they would be able to just see the patient and then dictate the progress note,” Ms. Tang said.
Ironically, the application of AI may involve just that. “What we see with AI might be similar to how dictation had taken over the medical documentation industry in the past,” said Mr. Venkatesh.
He sees a few options for how a patient's history of present illness (HPI) could potentially be documented using AI. “The AI system could help remove typos and streamline the narrative of a patient encounter, which is a little bit less invasive, or it could write that full encounter based on an audio recording of that encounter or based on some notes that the physician had jotted down during the conversation on the computer,” said Mr. Venkatesh.
One challenge with the idea of AI writing the full encounter is the difficulty of distinguishing between the present illness and past ones that might also be discussed during a visit. Ms. Tang has encountered this problem with fully automated coding systems. “It would pull in a diagnosis, but it might be the patient had it 20 years ago,” she said. “The [human] coder would go, ‘Oh no, this was 20 years ago, I can't code that.’”
Dr. Kvedar envisions a model that combines the intelligence of both humans and computers. He noted that his electronic health record currently requires him to choose codes after he documents a patient's care. “I can see me finishing a note, putting in the codes, and then pushing ‘finish visit,’” he said.
Then, he predicts, the AI-driven system “comes back and says, ‘Your documentation doesn't support that code. If you added one of these three things, that would align with your notes' or ‘You could be coding a higher-level visit because you did these three things. Do you wish to code more?’”
AI systems have already been learning and improving their results along similar lines, explained Jay Aslam, PhD, chief data science officer of Codametrix, an autonomous medical coding company that started as part of Mass General Brigham in Boston.
“We were trying to say, ‘Based upon the data that we've collected on individual doctors, for this doctor doing this procedure, it's likely one of these five codes' … and then we got it down to the point ‘I'm pretty sure it's just one of these three’ … and now, I'm saying with high confidence that ‘It's absolutely this code,’” he said. “That was our journey.”
The mention of procedures is significant, because some procedure-based specialties may move through this process more easily. "We're tackling specialty by specialty," said Dr. Aslam. "We started off in radiology as our first commercial product. Then we're also doing pathology, GI, and surgery."
Hospitalists' work is more difficult to code automatically, but the company is currently working on teaching AI to code inpatient medical visits. “There you get into a lot of the complexities that you would see in general internal medicine, like [evaluation and management] leveling that you have to be able to do, so we're tackling all those things right now,” he said.
Another challenge with having AI code hospital medicine is that, unlike a medical student, it won't be inclined to see the zebras, noted Mr. Venkatesh, citing one study finding that even in a very large database, thousands of ICD-10 codes appear fewer than 10 times. “There are not many instances for the AI to really learn from,” he said. “And in medicine, there are a lot of rare things out there.”
Fully automating the coding of inpatient care would also involve the complexities of selecting a principal diagnosis for a diagnosis-related group, something Ms. Tang thinks will continue to require human expertise. “You must have someone who is really well trained to do that,” she said.
Dr. Aslam did not disagree. “My view is that the human in the loop will always be there. We will automate more and more and more, but I don't know if we'll get to the point any time soon where there will be no medical coders,” he said. “What we're really trying to do is eliminate the things which are burdensome and tiresome.”
The automation of documentation will likely proceed in the same piecemeal manner, according to Mr. Venkatesh. To document an HPI, “It's slightly less necessary to have human thought and intuition and innovation,” he said. “The assessment and plan are really the most human part of this process, involving a lot of clinical reasoning, and that, to me, is what will probably go last, if at all.”
With effective integration into electronic health records, AI should also be good at compiling test and lab results into the record, Mr. Venkatesh noted.
Despite its promise to reduce physician workload, AI “should probably not be given the reins any time soon,” Mr. Venkatesh cautioned. “I think the first step is to maintain a very keen and healthy suspicion of anything that is generated by AI as we investigate accuracy and work to prevent hallucinations that might exist within the outputs.”
Hallucination is the term for an AI system generating information that is not true. Perhaps the best-known example in medicine is that when ChatGPT is asked to write a journal article, it will invent references that do not exist. "If hallucinations are interpreted as reality by clinicians down the line looking at documentation, it's a very, very scary situation that threatens patient safety and quality of care," said Mr. Venkatesh.
Experts at big players in the AI and health care industries, such as Microsoft and Epic, are currently hard at work figuring out how to detect and prevent hallucinations, he added. But until that's solved, clinicians may be reluctant to incorporate the technology into their day-to-day work, experts agreed.
“I think doctors are nervous about AI. They're nervous about explainability. They're nervous about reliability. Some of them are nervous about bias in the datasets of the algorithms,” said Dr. Kvedar.
Richard Pinson, MD, FACP, a clinical documentation and coding expert based in Chattanooga, Tenn., cofounder of Pinson & Tang, and ACP Hospitalist's coding columnist, seconded that perspective. “Can we really trust it?” he asked. “Are we going to have to have a human review everything to see if the conclusions are right? It's very scary.”
To help alleviate these valid concerns, it's important for AI in medicine to be “glassbox,” or transparent, about the logic behind its actions, said Dr. Aslam. “In our particular tool, for instance, you can pull up any note, and we will tell you what codes we predicted. You can click on the code and it will highlight the part of the note that is why we said that this code is appropriate.”
As another safeguard, electronic health records will likely notify users whenever information is being provided by AI, Dr. Kvedar predicted, at least for now. “Maybe there will come a time when we shrug our shoulders over this whole thing,” he added.
At that point, Dr. Kvedar hopes, physicians will be able to let the computers work without supervision and turn their focus to patients. “The proper use of these tools should free us up to be more human, because computers don't do caring, they don't do emotional intelligence, they don't do judgment very well,” he said. “So those are all things we can apply our brains to.”