A free web tool known as “Chester the AI Radiology Assistant” can assess chest X-rays online within seconds, predicting a patient’s likelihood of having 14 diseases while keeping private medical data on the user’s own device.
Chester, though still rudimentary, can process an uploaded X-ray and output diagnostic predictions for atelectasis, cardiomegaly, effusion, infiltration, masses, nodules, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening and hernias with 80 percent accuracy. A green-to-red sliding scale shows the predicted probability for each condition, ranging from “healthy” to “risk.”
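Because the 14 conditions are not mutually exclusive, a multi-label model like Chester’s produces an independent probability per disease rather than a single diagnosis. The sketch below, a simplified illustration and not Chester’s actual code, shows how raw per-disease scores might be converted to probabilities and mapped onto a “healthy”/“risk” scale; the function names and the 0.5 cutoff are assumptions for illustration:

```python
import math

# The 14 conditions Chester reports on (per the NIH chest X-ray label set).
DISEASES = [
    "atelectasis", "cardiomegaly", "effusion", "infiltration", "mass",
    "nodule", "pneumonia", "pneumothorax", "consolidation", "edema",
    "emphysema", "fibrosis", "pleural_thickening", "hernia",
]

def sigmoid(logit: float) -> float:
    """Squash a raw model output into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-logit))

def risk_label(prob: float, threshold: float = 0.5) -> str:
    """Map a probability onto a simple healthy/risk scale (cutoff is illustrative)."""
    return "risk" if prob >= threshold else "healthy"

def report(logits: list) -> dict:
    """One independent probability per disease -- multi-label, not a single verdict."""
    return {name: (sigmoid(z), risk_label(sigmoid(z)))
            for name, z in zip(DISEASES, logits)}
```

In this setup a scan can sit at “risk” for several conditions at once, which is why the interface shows one sliding scale per disease rather than one overall answer.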
Joseph Paul Cohen and his colleagues at the Montreal Institute for Learning Algorithms debuted Chester in a paper published earlier this year, writing that they set out to build a system that could scale at minimal computational cost while preserving privacy and diagnostic accuracy. Anyone with a web browser can use the tool (smartphones included), but it is intended to supplement a professional’s opinion, not replace it.
“Deep learning has shown promise to augment radiologists and improve the standard of care globally,” Cohen et al. wrote in their paper. “Two main issues that complicate deploying these systems are patient privacy and scaling to the global population.”
The team developed Chester using an implementation of the previously established CheXnet DenseNet-121 model and the same train-validation-test split as Rajpurkar et al.’s initial 2017 paper on the subject. That architecture let Chester analyze chest scans with a web-based, but locally run, system.
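Reusing the Rajpurkar et al. split matters because chest X-ray datasets contain multiple scans per patient, and a split is typically made at the patient level so the same person never appears in both training and test data. A hedged sketch of one common way to make such a split deterministic, by hashing patient IDs; the bucket fractions here are illustrative, not the paper’s exact numbers:

```python
import hashlib

def assign_split(patient_id: str, val_frac: float = 0.1, test_frac: float = 0.2) -> str:
    """Deterministically assign a patient to train/valid/test by hashing their ID.

    Every image from one patient lands in the same bucket, preventing
    leakage between splits. Fractions are illustrative assumptions.
    """
    digest = hashlib.sha256(patient_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    if bucket < test_frac:
        return "test"
    if bucket < test_frac + val_frac:
        return "valid"
    return "train"
```

Because the assignment depends only on the ID, anyone rerunning the pipeline reproduces the same split, which is what allows a follow-up system like Chester to report accuracy on the same held-out test set.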
The tool’s interface is designed to be simple and comprises three main components: out-of-distribution detection, disease prediction and prediction explanation. After an individual uploads their X-ray, Chester takes around 12 seconds to load its models initially, 1.3 seconds to compute the relevant graphs and an additional 17 seconds to compute the gradients that explain its predictions.
A separate function lets patients view a heatmap of the image regions that influenced Chester’s prediction, and at any time they can view an out-of-distribution heatmap showing where the image diverged from the team’s training distribution. If the heatmap is too bright, the patient’s image differs too much from Chester’s training dataset for the tool to make an accurate prediction.
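The explanation heatmap rests on a simple idea: pixels whose change would most move the prediction are the “hot” regions. Chester computes true gradients through its network; the toy sketch below approximates the same quantity by perturbing one pixel at a time (finite differences) on an arbitrary scoring function, purely to illustrate the principle:

```python
def saliency_map(image, score_fn, eps=1e-3):
    """Approximate |d score / d pixel| for each pixel by finite differences.

    `image` is a 2D list of pixel values and `score_fn` stands in for a
    model's disease score. A real system backpropagates gradients instead
    of perturbing pixels, but the resulting heatmap means the same thing.
    """
    base = score_fn(image)
    heat = [[0.0] * len(row) for row in image]
    for i, row in enumerate(image):
        for j, _ in enumerate(row):
            bumped = [r[:] for r in image]       # copy, nudge one pixel
            bumped[i][j] += eps
            heat[i][j] = abs(score_fn(bumped) - base) / eps
    return heat
```

On a real network this pixel-by-pixel loop would be far too slow, which is why gradient-based methods exist: one backward pass yields the whole heatmap at once, accounting for most of Chester’s 17-second explanation step.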
“We will prevent an image from being processed if it is not similar enough to our training data in order to prevent errors in predictions,” the developers warned.
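That gating behavior can be sketched in a few lines. Chester’s actual detector compares an image against the training distribution in a learned feature space; the version below uses raw pixel statistics and an arbitrary threshold, so treat it as an illustration of the refuse-to-predict logic only:

```python
from statistics import mean

def ood_distance(pixels, train_mean, train_std):
    """Score how far an image's intensity statistics fall from the training set."""
    return abs(mean(pixels) - train_mean) / train_std

def should_process(pixels, train_mean, train_std, threshold=3.0):
    """Gate prediction: refuse images too unlike the training data.

    The threshold and the use of a pixel mean are illustrative assumptions;
    the real system measures distance on learned image features.
    """
    return ood_distance(pixels, train_mean, train_std) <= threshold
```

The design choice is the important part: rather than silently producing a meaningless probability for, say, a hand X-ray or a photograph, the tool declines to answer when the input is out of distribution.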
Cohen and colleagues said they created Chester for the medical community, so researchers can “experiment with it to figure out how it can be useful.” Other aims included:
- Demonstrating the strength of open datasets and advocating for more unrestricted public dataset creation projects.
- Establishing a lower bound of care—the team said all radiologists should be “no worse” than Chester.
- Designing a model that could be copied to globally scale health solutions without untenable costs.
- Demonstrating that patient privacy can be preserved while using a web-delivered system.
“This tool can be used as an assistant and as a teaching tool,” Cohen and co-authors wrote. “The system is designed to process everything locally, which ensures patient privacy as well as enables it to scale from 1 to 1 million users with negligible overhead. We hope this prompts radiologists to give us feedback which would help us improve this tool and adapt it to their needs.”