There’s still a long way to go with both research into Alzheimer’s disease and AI tools to help detect it, but deep-learning approaches continue to show promise for classifying the condition on images of the brain.
These tools are most effective when applied to images obtained from more than one neuroimaging modality, and they’re most useful when deployed in conjunction with insights from lab tests.
Researchers at Indiana University arrived at these conclusions after reviewing the literature. Their full study was published in Frontiers in Aging Neuroscience.
Searching PubMed and Google Scholar, Andrew Saykin, PsyD, and colleagues found 16 studies that met their inclusion criteria. Four of these combined deep learning and traditional machine learning, while 12 used only deep learning approaches.
The team evaluated and classified the studies by algorithm and neuroimaging type, then summarized each paper’s findings.
They found the best classification performance came when multimodal neuroimaging information was combined with data from fluid biomarkers.
Further, classification accuracies reached as high as 98.8% for studies combining deep learning with traditional machine learning, and up to 96% for studies using deep learning alone.
“While it is a source of concern when experiments obtain a high accuracy using small amounts of data, especially if the method is vulnerable to overfitting, the highest accuracy of 98.8% was due to the [use of a] stacked auto encoder procedure, whereas the 96% accuracy was due to the amyloid PET scan, which included pathophysiological information regarding Alzheimer’s disease,” the authors wrote.
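For readers unfamiliar with the technique the authors credit for that top result: a stacked autoencoder pretrains compressed feature representations layer by layer, then a classifier is trained on the final codes. Below is a minimal NumPy sketch of that general idea on synthetic two-class data; it is purely illustrative (none of the data, layer sizes, or hyperparameters come from the reviewed studies).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden, epochs=300, lr=0.1):
    """One-layer tied-weight autoencoder; returns encoder parameters."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = H @ W.T + c               # decode (linear output)
        err = R - X                   # reconstruction error
        dH = (err @ W) * H * (1 - H)  # backprop through the encoder
        W -= lr * (X.T @ dH + err.T @ H) / n  # tied-weight gradient
        b -= lr * dH.sum(0) / n
        c -= lr * err.sum(0) / n
    return W, b

def encode(X, W, b):
    return sigmoid(X @ W + b)

def train_logreg(H, y, epochs=500, lr=0.5):
    """Logistic-regression head trained on the learned features."""
    n, d = H.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(H @ w + b)
        w -= lr * (H.T @ (p - y)) / n
        b -= lr * (p - y).mean()
    return w, b

# Synthetic two-class data standing in for image-derived features.
X = np.vstack([rng.normal(-1, 1, (100, 10)),
               rng.normal(1, 1, (100, 10))])
y = np.repeat([0.0, 1.0], 100)

# Greedy layer-wise pretraining: the second autoencoder trains on the
# first layer's codes; the classifier then sits on the final codes.
W1, b1 = train_autoencoder(X, hidden=6)
H1 = encode(X, W1, b1)
W2, b2 = train_autoencoder(H1, hidden=4)
H2 = encode(H1, W2, b2)

w, b = train_logreg(H2, y)
acc = ((sigmoid(H2 @ w + b) > 0.5) == y.astype(bool)).mean()
```

Note that this toy setup measures accuracy on the training data itself, which illustrates exactly the overfitting concern the authors raise: high accuracy on a small sample says little about performance on unseen patients.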
The authors conclude by noting that Alzheimer’s research using deep learning is still evolving, emphasizing that the technology needs to attain higher levels of transparency and reproducibility if it is to become practical in clinical settings.