NYU’s Daniel Sodickson on AI, Facebook and Why Faster MR Scans Could Improve Healthcare

A new project is seeking to make MRI scans up to 10 times faster by capturing less data. NYU’s Center for Advanced Imaging Innovation and Research (CAI2R) is working with the Facebook Artificial Intelligence Research group to “train artificial neural networks to recognize the underlying structure of the images to fill in views omitted from the accelerated scan.” If MRI gets faster, it could displace X-ray and CT for some applications, sparing patients ionizing radiation. Daniel Sodickson, MD, PhD, vice chair for research in radiology and director of CAI2R, shared his take on the project.

How did NYU connect with Facebook on this project? We have been working on accelerating MRI by any means available. In 2016, we described some of the first uses of deep learning for image reconstruction from accelerated data acquisitions, and now that is an exploding area in MR research. A colleague at NYU connected us with the Facebook AI group. The challenge of reconstructing fast MR images from limited data really appealed to them, both because it raised fundamental questions for AI and because it addressed a problem with significant impact.
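To make the reconstruction idea concrete, here is a minimal sketch in Python. It is not the NYU/Facebook code; the undersampling helper, the synthetic “scan” and the tiny network are all hypothetical stand-ins, meant only to illustrate the general approach of training a model to recover image detail lost when k-space is undersampled.

```python
import numpy as np
import torch
import torch.nn as nn

def undersample(image, acceleration=4):
    """Keep every `acceleration`-th k-space line, then reconstruct naively (zero-filled)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(kspace)
    mask[::acceleration, :] = 1.0          # retain only a subset of phase-encode lines
    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(zero_filled).astype(np.float32)

# Toy "scan": random pixels standing in for a fully sampled reference image.
reference = np.random.rand(64, 64).astype(np.float32)
aliased = undersample(reference)

# A deliberately tiny CNN; real reconstruction networks (e.g. U-Nets) are far larger.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.from_numpy(aliased)[None, None]    # shape (1, 1, 64, 64)
y = torch.from_numpy(reference)[None, None]

for step in range(200):                      # overfitting one example, purely to show the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

In practice the network would be trained on many pairs of undersampled and fully sampled scans, which is exactly the kind of specialized data the project plans to release.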

Why the need to speed up MRI scans? AI can help us gather data in new ways. Speed is one of the fundamental currencies of imaging; the faster you can go, the more information you can get. MRI can generate multiple views of the body, looking at anatomy, function and cellular microstructure. These exams typically take 15 minutes to an hour or more, which can be a real impediment for patients with chronic illness or for children who struggle to stay still for extended periods.

What impact could this have on healthcare? It would create a more comfortable patient experience and increase MRI accessibility in areas where MR machines are scarce or oversubscribed. Faster MRI can also mean improved image quality. We’re talking about increasing the shutter speed, so that we can freeze out motion and see anatomy more clearly.

NYU and Facebook have indicated plans to open-source this research as the work moves forward. Why? We will open-source both our methods and the architecture we are using so that other researchers can work with them. We’ll also open-source the dataset, giving people access to highly specialized data that are generally difficult to gather but essential for the success of modern AI techniques like deep learning. 


View more features from this issue:

Building Foundations to Build Better Care

Embracing AI: Why Now Is the Time for Medical Imaging

Leveraging Technology, Data and Patient Care: How Geisinger Is Interjecting Insight & Action

Bullish on AI: The Wisconsin Way: Reengineering Imaging & Image Strategy

ML’s Role in Building Confidence and Value in Breast Imaging

Will ‘Smart’ Solutions Really Transform Cardiology?

Matching Machine Learning and Medical Imaging: Predictions for 2019

Machine Learning 101: Simplifying It One Term at a Time

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.