The Georgetown University Medical Humanities Initiative, a collaboration between the College of Arts & Sciences and the Georgetown University Medical Center to promote learning at the intersections of the humanities and health, welcomed experts in history, medicine and artificial intelligence (AI) for a panel on AI’s growing role in medicine and its enduring humanistic dimensions.
The event, titled “Signs, Systems, Stories: Diagnosis & Being Human in the Age of AI,” offered Georgetown students and faculty an opportunity to learn about the possibilities and limitations of AI in the medical field.
Dr. Adam Rodman, assistant professor of medicine at Harvard Medical School, said most areas of medicine, including clinical diagnosis and data management, have adopted AI in some form.
“It’s routinely being used,” Rodman said at the event. “It’s definitely not perfect, but it’s supplanted old tools, old databases.”
Rodman added that he has begun to use AI as a supplement to his own clinical practice and research.
“As a researcher, I use it as a thought partner,” Rodman said at the event. “At times, I’ve asked patients for permission, then used audiovisual AI as a third partner or second opinion.”
Katie Palmer, a health technology correspondent at STAT News, said many AI tools used in medicine are not approved by the U.S. Food and Drug Administration (FDA) because the technology’s rapid integration has outpaced regulation.
“It’s easy to assume that every medical technology has a rule book. That’s almost always not the case,” Palmer said at the event. “Only a small fraction of the AI-based medical devices in use are FDA-approved. Anything using generative AI is not regulated by the FDA, and it’s hard to tell what the future of that will be.”
Arjun Manrai, assistant professor of biomedical informatics at Harvard Medical School, said developments in AI often outpace doctors’ understanding of how the underlying models work.
“We don’t really understand how exactly these models work. My friends in AI can’t explain some of what is happening there,” Manrai said at the event. “For medicine, they’re releasing the models before this understanding is established, so we’re using the release to see what they can do and approaching them that way.”
Palmer said AI continues to suffer from implicit biases that can lead to discriminatory outcomes in medicine, and that potential budget cuts will only worsen the problem.
“It can be hard to make sure these AI systems are learning from studies that are validated and equally accurate across demographic groups,” Palmer said. “It is difficult to show that discrimination does not result from the use of these systems, and ensuring data is accurate to different groups requires money and resources, and a lot of that funding is certainly being threatened by budget cuts.”
According to Rodman, the overall lack of regulation and security surrounding AI can erode trust in these tools and stall progress.
Rodman said he encouraged researchers to participate in the AI conversation.
“Right now, these tools don’t come with instruction manuals, and now we as researchers write the instruction manuals,” Rodman said. “You can’t not engage.”
Nicoletta Pireddu, a professor of Italian and comparative literature at Georgetown and director of the Georgetown Humanities Initiative, said centering the human experience is crucial in the practice of medicine, even with AI.
“The difference between humans and AI is the effects of experience and the world,” Pireddu said at the event. “Emotions still matter in medicine.”
Manrai said that, despite the risks, these tools hold promise for medical advances, especially in improving access to quality and equitable care.

“There’s not as much medical education happening around capabilities and limitations of AI, but carefully applied, these tools can even help improve disparities, including accessibility,” Manrai said. “There is an access problem in medicine and medical information, and maybe we can see AI provide some hope there.”