Duke researchers to address AI transparency struggle

Duke researchers have been awarded a grant worth more than $196,000 to examine the tension between the need for explainability features in AI-based clinical decision support software and the need to protect developers' trade secrets, an issue researchers say has become an emerging problem in AI-enabled healthcare delivery.

The one-year grant was awarded to researchers at the Duke Law Center for Innovation Policy and the Duke-Margolis Center for Health Policy, and is funded by The Greenwall Foundation, an organization that supports bioethics research.

“Research on how to effectively integrate artificial intelligence into healthcare delivery is a new and emerging area of work for the Duke-Margolis Center,” Gregory Daniel, PhD, MPH, deputy director for policy and clinical professor at the Duke-Margolis Center for Health Policy, said in a statement.

“By working in collaboration with Duke Law, we can move much more quickly to identify real-world policy approaches to support emerging technologies that incorporate AI in helping physicians and patients make better healthcare decisions.”

In their grant proposal, researchers argued that part of healthcare professionals' duty is basing decisions on sound and explainable rationales, and that they have a right to understand the nature and rationale of treatments, including software-generated recommendations. However, some detailed explanations could facilitate reproduction of the software in a way that compromises developers' trade secrets.

Researchers have expressed a need for AI-based clinical decision support systems to incorporate more transparency and explain decisions so users can understand how and why certain recommendations were offered. The issue is often referred to as AI’s “black box” challenge.

“While software development has always involved trade secrecy, the importance of trade secrecy as an innovation incentive may have increased as a consequence of challenges associated with securing and enforcing software patents,” Arti Rai, the Elvin R. Latty Professor of Law at Duke Law School, said in a statement. “For this reason, the principal regulator of AI-based software, the FDA, as well as professional organizations, providers and insurers are actively interested in the question of how to balance explainability and trade secrecy.”

Through the grant, the research team plans to collect and review data on private and public investments in AI-based software, according to a press release. They also plan to conduct interviews and hold private workshops with stakeholders across the healthcare sector, including developers, purchasers, regulators, users and patients.

The research team’s goals include generating recommendations on appropriate levels of explainability for healthcare professionals and determining what legal or private self-regulatory approaches might be employed to achieve them.

Researchers plan to publish their findings and recommendations in a white paper and peer-reviewed journals. They also plan to hold a public conference to discuss the findings.

""

Danielle covers Clinical Innovation & Technology as a senior news writer for TriMed Media. Previously, she worked as a news reporter in northeast Missouri and earned a journalism degree from the University of Illinois at Urbana-Champaign. She's also a huge fan of the Chicago Cubs, Bears and Bulls. 
