Providing detailed information on AI systems can address patient concerns

Despite AI’s potential in healthcare, major ethical concerns, including consent, transparency and responsibility, still surround the technology and its use in medicine. To address them, Georgia Tech researchers are encouraging physicians and healthcare companies to provide patients with detailed information about AI systems.

“Hopefully, the healthcare community will collectively meet these goals by encouraging open and robust dialogue about evaluating new AI technologies and integrating them into training and patient care,” Daniel Schiff and Jason Borenstein, PhD, wrote in a commentary published in the AMA Journal of Ethics. Borenstein is the director of the Graduate Research Ethics Program and associate director of the Center for Ethics at the Georgia Institute of Technology, where Schiff is a graduate student.

Schiff and Borenstein noted that patient and physician perceptions of AI could influence decision-making: a patient might express concern about using AI during a procedure, for example, or a physician might place too much confidence in the AI system. Additionally, transparency problems could make it difficult for patients to understand AI technology and its decision-making, which in turn could undermine valid informed consent.

To mitigate patient concerns, the authors encouraged providers who are well informed about AI systems to explain the technology to patients, including its benefits and risks, and to distinguish the roles of the device and of human caregivers during procedures. Providers should also explain any potential risks that could result from human or AI errors.

“While these two recommendations are important for proper informed consent, understanding and responding to patients’ fears is also essential to good patient engagement and medical care,” Schiff and Borenstein wrote. “These two recommendations are not intended to be an exhaustive list; rather, they are a starting point for addressing sources of serious clinical and ethical concern about AI.”

The authors also encouraged healthcare organizations and doctors to identify the primary actors who could be held ethically responsible for any medical errors that occur while using AI technology:

  • Coders and designers
  • Medical device companies
  • Physicians and other healthcare professionals
  • Hospitals and healthcare systems

“Transparency and clarity about roles and responsibilities can help ensure that the responsibility net is cast neither too narrowly nor too broadly,” they wrote.

“Other actors, including regulators, insurance companies, pharmaceutical companies and medical schools, also have important responsibilities. Each actor can take steps to ensure safe, ethical use of AI systems and encourage others to do so, too,” Schiff and Borenstein wrote. “These actions can help promote coordination among the various stakeholders about the use of AI in healthcare and contribute to a clearer sense of how to assign responsibility for successes as well as errors.”

""

Danielle covers Clinical Innovation & Technology as a senior news writer for TriMed Media. Previously, she worked as a news reporter in northeast Missouri and earned a journalism degree from the University of Illinois at Urbana-Champaign. She's also a huge fan of the Chicago Cubs, Bears and Bulls. 
