New algorithm could help solve AI’s ‘black box’ challenge

Researchers from Massachusetts General Hospital (MGH) in Boston designed a deep-learning algorithm that provides the reasoning behind its decisions, which could help solve transparency issues associated with AI, according to a report by Health Imaging.

According to the research, originally published online in Nature Biomedical Engineering, the deep-learning algorithm reveals its reasoning through an “attention map” that highlights the regions of the images most important to its predictions. The feature also eliminates the need for radiologists to annotate the large, high-quality datasets used to train most deep-learning models, the report stated.
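The article does not detail how the attention map is computed, but a common way to produce this kind of visualization is gradient-weighted class activation mapping over a CNN's final convolutional features. The sketch below is a minimal illustration of that general technique, not the MGH team's published method; the backbone, hooked layer, and preprocessing are assumptions.

```python
# Illustrative Grad-CAM-style attention map for a CNN classifier.
# The resnet18 backbone and layer4 hook are assumptions for the sketch.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in backbone
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["feat"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block so its feature maps can be weighted.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def attention_map(image, class_idx):
    """Return a heatmap of the regions that drove the predicted class score."""
    logits = model(image)                            # image: (1, 3, H, W)
    model.zero_grad()
    logits[0, class_idx].backward()
    feats = activations["feat"]                      # (1, C, h, w)
    grads = gradients["feat"]                        # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]                                 # (H, W), values in [0, 1]
```

Overlaying the returned heatmap on the original CT slice highlights the regions the model relied on, which is the kind of visual explanation the report describes.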

The “black box” challenge within AI refers to the inability of AI-based systems to explain how they reached a certain decision. Transparency issues within AI and machine learning have been widely discussed by medical experts. In a recent article, experts listed several ways to improve clinical decision support systems (CDSSs) in the AI era, and incorporating transparency into the systems was named a top priority.

Overall, the deep-learning algorithm was trained on fewer than 1,000 images to detect intracranial hemorrhage (ICH) and classify its five subtypes on unenhanced head CT scans. Researchers found the model performed with accuracy comparable to, and sensitivity higher than, trained radiologists.
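In practice, detecting ICH and classifying its five subtypes is typically framed as multi-label classification, since more than one subtype can appear on the same scan. The sketch below shows that general setup under stated assumptions (the backbone, subtype ordering, and "any hemorrhage" rule are illustrative, not the study's actual architecture).

```python
# Illustrative multi-label ICH subtype classifier (not the published model).
import torch
import torch.nn as nn
from torchvision import models

SUBTYPES = ["epidural", "subdural", "subarachnoid",
            "intraparenchymal", "intraventricular"]

class ICHClassifier(nn.Module):
    def __init__(self, num_subtypes=len(SUBTYPES)):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_subtypes)

    def forward(self, x):
        return self.backbone(x)          # raw logits, one per subtype

model = ICHClassifier()
criterion = nn.BCEWithLogitsLoss()       # multi-label: subtypes can co-occur

scans = torch.randn(4, 3, 224, 224)      # dummy batch standing in for CT slices
labels = torch.randint(0, 2, (4, 5)).float()
loss = criterion(model(scans), labels)

probs = torch.sigmoid(model(scans))
any_ich = probs.max(dim=1).values        # detection score: any subtype present
```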

To read the full Health Imaging report, click the link below.

""

Danielle covers Clinical Innovation & Technology as a senior news writer for TriMed Media. Previously, she worked as a news reporter in northeast Missouri and earned a journalism degree from the University of Illinois at Urbana-Champaign. She's also a huge fan of the Chicago Cubs, Bears and Bulls. 
