Medical researchers must understand risks associated with AI

AI is poised to change the healthcare industry forever—but risks remain that researchers must take seriously. The need for such caution is one of the key takeaways from a new commentary piece published in The Hill on Jan. 16.

“AI can augment and improve the healthcare system to serve more patients with fewer doctors,” wrote author Enid Montague, PhD, an associate professor of computing at DePaul University and adjunct associate professor of general internal medicine at Northwestern University. “However, health innovators need to be careful to design a system that enhances doctors’ capabilities, rather than replace them with technology and also to avoid reproducing human biases.”

Due to physician shortages throughout the world and rising rates of burnout, Montague wrote, healthcare providers desperately need the help AI technologies can offer. However, Montague added, “AI systems can also cause problems.”

“Increased medical error is a real potential consequence of poorly designed AI in medicine,” Montague wrote.

Other potential problems related to AI include “complacency from physicians” and patients who are less engaged with their own healthcare, Montague added. Montague also emphasized the importance of AI being developed by a diversified workforce, noting that a group of all white men “may not be trained to think about bias comprehensively.”

Click the link below to read Montague’s full commentary:

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
