Would you trust ChatGPT as your medical scribe? Here's what 9 orthopedic surgeons and neurosurgeons had to say

Use of OpenAI's chatbot ChatGPT is increasing rapidly in the medical field. As the chatbot learns to pass medical exams, physicians are left asking how much to trust the application in the operating room. 

While some surgeons are willing to put full trust in ChatGPT for tasks including notetaking and paperwork management, others are not so sure. 

Editor's note: Responses have been lightly edited for clarity and length. 

Question: Would you trust ChatGPT to be the sole medical scribe/notetaker during patient appointments? 

Adam Bruggeman, MD. Orthopedic Surgeon at Texas Spine Care Center (San Antonio): Artificial intelligence and machine learning provide a tremendous opportunity to reduce costs and improve efficiencies in physician practices, particularly in a tight labor market. I think this application (scribing an office visit) has value and would be helpful for many practices, but being the sole notetaker seems a step too far. Ambient documentation works well as an adjunct to get 90 percent of the note complete; the note should still be reviewed and edited by the physician who performed the exam and interviewed the patient.

Brian Gantwerker, MD. Neurosurgeon at the Craniospinal Center of Los Angeles: Temptations ranging from "it will simplify your life" to "workflow automation" are frequent refrains when people sing the praises of ChatGPT. I feel that there are things that are best left up to the human beings in the room. ChatGPT is, like many things in technology, a tool. My concern is that there are folks out there who are looking for ways to bill more and more, without regard to a patient's wellbeing. 

We, as physicians, need the moral backbone to relegate technology to a place outside of the operating room, outside of the clinic, and as far away from patient care as possible. On the billing end, or payer end, I think there is some running room. I look at some of the more breathless adopters putting out papers and going on to media outlets talking about how ChatGPT "passed the boards." It will never pass the test of human interaction. In fact, it shouldn’t even be given the exam.  

Doug Platy, DO. Spine Surgeon at Inland Northwest Spine (Coeur d'Alene, Id.): I would trust it to write down exactly what is being said during a patient appointment, obviously with the expected errors, which we encounter even with dictation. However, this leaves little room for summarization and any needed interpretation and thought on the surgeon's part; that would need to be left to the physician to dictate.

Harel Deutsch, MD. Neurosurgeon at Midwest Orthopaedics at Rush (Chicago): Yes. I would trust ChatGPT, or other such services. There can always be errors, and in the past humans were better than machines at transcription, but that time has come to an end, and we are likely in a period where AI transcription will be superior to human transcription. I think the real question will be whether you can trust ChatGPT to be the sole medical provider in the room. 

Issada Thongtrangan, MD. Orthopedic Surgeon at Microspine (Scottsdale, Ariz.): In my opinion, it is not yet ready for prime time in clinical diagnosis and treatment until it has been validated. It may be useful and effective in a scribing role, but its output still needs to be checked before you sign off. In general, doctors should not be using ChatGPT by itself to practice medicine.

A recent study evaluated how the chatbot performs at diagnosing patients. When presented with hypothetical cases, ChatGPT produced a correct diagnosis at close to the level of a third- or fourth-year medical student. I assume that commercial payers are looking to invest in this technology, using AI algorithms or ChatGPT in their workflows to tighten up the authorization process.

Kenneth Nwosu, MD. Spine Surgeon at NeoSpine (Burien and Puyallup, Wash.): At this juncture, no. Because of increasing surgery denials, accurate and complete patient encounter documentation is of utmost importance. Hence, the risk of exacerbating the aforementioned issues does not outweigh the benefit of efficiencies provided by AI instruments such as ChatGPT at this time. Perhaps a spine surgery-tailored version will be more useful in the future. This model will require physician-specific training to account for heterogeneity among spine surgeons. 

Kevin Stone, MD. Orthopedic Surgeon at the Stone Clinic (San Francisco): If editing occurs afterwards, yes. 

Lali Sekhon, MD. Neurosurgeon at Reno (Nev.) Orthopedic Center: Right now, I use a digital scribe for transcription, DragonDictate. At times, it’s excellent. At times, it’s terrible. I have the following disclaimer in my dictation template:

"The electronic medical records system uses DragonDictate voice transcription software. This is not a formal transcription service, but is partially electronic. Whilst I have checked the document for errors and corrected them to the best of my ability, there may be grammatical and typographical errors, as well as incorrect words placed due to the transcription software that I did not detect and correct. They have no clinical impact on the intent of the note and treatment of the patient. I will attempt to correct the errors as the visit(s) progresses, however there will be missed attempts and I do reserve the right to correct the errors at any time in the future." 

I think ChatGPT would currently be the same. Amongst many things, the medical record is a legal document. It’s up to the provider to check what is put into the record under their name with or without disclaimers. Like most new technology, it’s in beta stages at the moment and I suspect it’s only about as good as Dragon.

Praveen Mummaneni, MD. Neurosurgeon at University of California San Francisco: I would not trust ChatGPT to be my scribe. Medical notes must adhere to privacy rules, and I am not confident an online computer platform will adhere to the rules for a HIPAA-compliant note. Furthermore, the notes need to be proofread to ensure there are no errors transcribed. If an error is found, would the physician manually correct it? Would the physician take time to make sure that ChatGPT got the feedback? This platform is not ready for this type of use. 
