“Call me old-fashioned, but no thanks,” wrote Jennifer from Virginia. “I prefer human interactions to a robot doctor.”
I empathize with concerns that some technological advances might get in the way of the patient-doctor relationship, though I think Tom’s and Jennifer’s scenarios are unlikely to materialize any time soon. Medicine is a conservative profession that’s slow to adopt change, and health-care providers are generally taking a cautious approach to AI.
Many of the current AI uses are quite banal. Adam Landman, an emergency physician and chief information officer for Mass General Brigham in Boston, gave me several examples of how his hospitals have incorporated AI technology to reduce inefficiencies.
One is in the development of staff training videos. In the past, they would hire actors to read a script. If the script needed editing, they’d have to bring back the actors. They’ve now piloted an AI product that allows users to choose an avatar and digitally enter the script. The video is created right away, and edits can be made seamlessly at a fraction of the original cost.
Another is in routing phone calls. Instead of placing callers on hold so that a human operator could manually direct them to patient rooms or schedule appointments, his hospitals use software with natural language understanding. “That’s been a huge win for patients because they get connected faster, and it frees up operators to handle more complex calls,” he said.
Landman says he thinks of these cases in terms of value and risk. “We would love to have AI help us with clinical triage and diagnosis, but that comes along with quite a bit of risk,” he said. “If we get that wrong, you could potentially really harm a patient.” It’s possible, for instance, that AI will one day serve as an initial screen to advise patients on whether to go to the emergency room, seek urgent care or wait to see their primary care physician. But he doesn’t think the technology is there yet.
Interestingly, quite a few hospitals, including Mass General Brigham, implemented this type of AI triage mechanism at the beginning of the pandemic for patients with suspected covid-19. Patients would enter their symptoms into a chatbot, which would then help them determine whether they needed to go to the ER or could isolate at home.
“We used that during covid because it was a public health emergency and many of these patients had no other alternatives,” he said. The value outweighed the risk at the time, but once the acute crisis abated, the risk-benefit analysis changed and they stopped using the AI screening tool.
Covid spurred AI adoption in other ways, too. I spoke with Irene Dankwa-Mullan, a physician and former chief health equity officer at IBM Watson Health, about AI’s use in epidemic tracking. She told me that AI-powered analysis of large data sets identified associations between covid mortality and social determinants of health such as median household income, homeownership rates and whether a language other than English is spoken at home.
“You could also find out whether vaccination or non-pharmacological interventions worked better to reduce transmission,” she said. And AI could be used to pinpoint areas with spikes in covid to prompt health officials to target preventive measures.
One reader, Maya Reiser, wrote to me about an additional important use: As a radiologist in Maryland, she has been using AI to assist with mammograms. “It will mark areas of concern with high, intermediate, or low probability of malignancy,” she wrote. “It operates as a second pair of eyes for us, highlighting the areas we could have missed. However, it still requires our experience and expertise to interpret.”
AI-assisted mammography is similar to AI-augmented colonoscopies, which I mentioned in my column. To me, this type of technology is high in value and relatively low in risk. For my next mammogram, I will be asking whether my providers use AI. I hope the answer is yes. Why wouldn’t I want a second pair of eyes that can reduce missed cancer diagnoses?
To be clear, I do not discount potential future concerns over AI’s use in medicine. I think it’s imperative we proceed slowly and carefully. In my view, the best applications of AI will free up physicians’ time so that we can go back to what we came into the profession to do: Be with our patients and provide them with compassionate and high-quality care.
Are you a health-care provider who is incorporating AI into your practice or a patient who has experienced AI-augmented care yourself? I’d love to hear from you and to feature your voice in future newsletters!