Researchers from the Massachusetts General Hospital (MGH) in Boston designed a deep-learning algorithm that provides the reasoning behind its decision, which could help solve transparency issues associated with AI, according to a report by Health Imaging.
According to the research—originally published online in Nature Biomedical Engineering—the deep-learning algorithm reveals its reasoning through an “attention map” that highlights the regions of the images it used to make its predictions. The feature also eliminates the need for radiologists to annotate the large, high-quality datasets used to train most deep-learning models, the report stated.
The “black box” challenge in AI refers to the inability of AI-based systems to explain how they reached a given decision. Transparency issues in AI and machine learning have been widely discussed by medical experts. In a recent article, experts listed several ways to improve clinical decision support systems (CDSSs) in the AI era, and incorporating transparency into the systems was named a top priority.
Overall, the deep-learning algorithm was trained—using fewer than 1,000 images—to detect intracranial hemorrhage (ICH) and classify its five subtypes on unenhanced head CT scans. Researchers found the model performed with accuracy comparable to, and sensitivity higher than, that of trained radiologists.
To read the full Health Imaging report, click the link below.