From record-keeping to patient communications, artificial intelligence in medicine is developing rapidly. While AI can be a useful tool for physicians and patients, it also has drawbacks — especially if implemented incorrectly.
Brian Gantwerker, MD, a neurosurgeon at the Craniospinal Center of Los Angeles, recently spoke with Becker's about the dangers regarding AI in medicine and where it could be headed over the next several years.
Question: What are your main concerns in regard to the utilization, or overutilization, of AI in medicine?
Dr. Brian Gantwerker: In medicine, we already have a tremendous gulf between patients and physicians. We now have utilization managers, hospital bureaucrats and insurance companies. Now, insert AI to take care of the things we don't want to do — like answer patient questions. To me, that is almost heretical. Why did we go into medicine other than to interact with patients — to build trust, to guide, to support? Without a doubt, some in medicine will see this as a way to churn charts and earn revenue. That is, and should be, anathema to this very human art we practice.
Q: What barriers do we need to overcome before more AI can be implemented in medicine?
BG: As a piece in the media recently discussed, several CEOs and thought leaders in the AI space penned a note warning that AI could have dire, even apocalyptic, consequences for our society. This stems from a concept called the alignment problem: the AI takes us too literally and solves the problems we put before it by means we did not anticipate, and that could harm patients. Consider an example involving patient questions. The patient contacts the bot to ask for their lab results. The bot discloses that the patient has cancer. Understandably, the patient is devastated — the chatbot has disclosed important, life-altering information as a piece of code, not a person. Or take a surgical example: we tell the AI surgical robot to retract the temporal lobe while clipping an aneurysm. In doing so, it tears an important draining vein, and the patient is in dire danger before you even get to the aneurysm. We need guardrails built into these interactions, which are undoubtedly coming down the pike. The tech world refrain of "move fast and break things" should not be our mantra with AI in medicine.
Q: How are you feeling about utilizing AI in your practice: cautious, optimistic, something else?
BG: AI is pretty much here already. It is in Google searches, in the grammar programs we use and in many of the spine robots' software routines. I will resist the temptation to outsource my responsibility to my patients — addressing their concerns and answering their questions. If a reasonable solution presents itself in billing and prior authorization, I would be interested. That will ultimately be challenged by the insurance companies, who will no doubt come up with better bots, and the battle will likely continue, as we will both be armed by the same software companies.
Q: Do you see a future where AI plays more of a role in patient care than an actual physician?
BG: I hope to God not. That would really be the end of the great part of medicine, which is the human interaction we have with our patients. The "Prayer of the Physician," attributed to Maimonides, who lived in the 12th and 13th centuries, sits on my wall. It speaks of our duty to care for our patients in need and of the connections we all share as people. AI is the antithesis of this, and we need to be careful before trying to create our own obsolescence.