For decades, the department of radiology at the University of Wisconsin School of Medicine and Public Health in Madison has led the charge in creating innovative technology and translating imaging research into clinical practice.
Here we find some of the most progressive thinking and well-defined strategy in the revolution that is artificial intelligence. UW Health professionals are creating ways to “see” where humans cannot, better harnessing structured and unstructured data, and driving more predictive medicine. The research is advanced and vast, and the clinical translations are ahead of the curve.
For example, the department is in the vanguard of virtual colonography, having built a repository of 10,000-plus image datasets for training. When brought to bear on medical questions from real patients and their doctors, machine learning (ML)-powered innovation will not only find polyps and cancers of the colon but also flag incidental findings and drive best practices to improve patient health. More actionable insight is always the goal.
At the birthplace of digital subtraction angiography and many methods for CT reconstruction, new work in ML and deep learning (DL) is refining image processing and reconstruction, enhancing image interpretation and reducing EHR burdens for physicians. Researchers are also looking within radiology at areas such as bringing CT image quality to PET/MR, and beyond it to dermatology and ophthalmology.
Further, each month the school brings together radiologists, medical physicists and biomedical engineers, among other credentialed enthusiasts of medical AI, to discuss machine learning (ML) research in a focus-group setting.
The number of UW labs now using ML applications as investigational techniques is “somewhat staggering,” says Richard Bruce, MD, medical director of radiology informatics. “We have one of the largest medical physics departments in the world, with approximately 30 faculty and dozens of scientists and graduate students. There probably is not a single area that is not actively involved in using machine learning in some way.”
Given all that, it’s no wonder the scientific publisher Elsevier recently ranked UW’s radiology department third in the world for its impact on the field of medical imaging.
“A ton of work is being done here now” with machine learning, says John Garrett, PhD, director of radiology informatics. “We may be a little less focused than some other places on detection-type tasks or [cancer] classification, but we’re getting a lot done on image processing and image reconstruction,” not to mention image quality control and worklist prioritization for diagnosticians.
One of the department’s major focuses with ML is supplying radiologists with heretofore unattainable image data. An especially keen interest is using algorithms to extract aspects of images the radiologist could not possibly see “even if you had an hour to visually scour the images,” explains Gary Wendt, MD, MBA, enterprise director of medical imaging and vice chair of informatics. “We want to be able to bring out features and actionable data, particularly structured data, that even a great radiologist could never come up with, no matter how hard he or she looked at the images.”
Wendt, Bruce and Garrett shared their thinking on the current state and future outlook for AI in medical imaging in a roundtable discussion with AI in Healthcare.
TOWARD A MORE PERFECT EHR
Wendt, who is regarded by many in the broader medical imaging community as a pioneer of the radiological sciences, emphasizes that UW’s work with ML for image analysis extends beyond radiology. In fact, the institution has been working with visible-light imaging—aka photography and video—for more than a decade and a half. “This is going to be a very important area for deep learning in the future,” Wendt says before citing as examples melanoma photos from dermatology and retinopathy images from ophthalmology.
“If there are not enough ophthalmologists to screen every patient who has diabetes, and you had an AI algorithm that detected early diabetic retinopathy, you could plug in the critical patients and get them the care they need sooner,” Wendt says. He points out that, for patients, this scenario would represent a financial as well as a medical victory: Diabetic retinopathy that leads to blindness both tragically changes a life and costs a lot to care for.
Beyond helping with discrete diagnoses in particular circumstances, ML also stands to help electronic health records (EHRs) deliver on their potential: presenting clinicians with a holistic view of each patient so they can treat not just a condition but a person. The aim is to offer clinicians the most essential view of the patient at the time of care while also shining light on patient and population data longitudinally, and even into the future, to improve health and care.
“The adoption of EHRs is to a level where they can deliver all the clinical care, and there is finally the opportunity to be able to focus on how to use that data, how to deliver a better experience, how to optimize things,” says Bruce. Look for AI to build predictive algorithms that can be directly integrated into the EHR, he says.
“A lot of it is really about the nuts and bolts of looking at what’s there and figuring out how you can do a better job with the data that already exist. It’s dealing with the data overload,” Bruce adds. “It’s not necessarily always about providing something completely new but about how to simply bring attention to the things that are already there. How do we reduce the noise? How do we bring to the top all those related things…so we can paint the most compelling picture, story, of the patient.”
That includes cutting through the din of information overload and helping to triage patients as well as guide steps in their care pathways. In radiology, for example, this means prioritizing worklists so that radiologists read the most urgent imaging exams first. Referring physicians benefit both by getting results fastest on the patients who need intervention soonest and by collaborating with radiologists who can focus on the cases that matter most.
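The worklist prioritization described here can be sketched as a simple priority queue keyed by a model-assigned urgency score. This is an illustrative sketch only: the `Worklist` class, the score values and the accession numbers are hypothetical and do not represent UW's actual system.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Exam:
    # Heap entries are ordered by this field; accession is excluded
    # from comparison so ties don't fall back to string ordering.
    sort_key: float
    accession: str = field(compare=False)


class Worklist:
    """A reading worklist that surfaces the most urgent exam first."""

    def __init__(self):
        self._heap = []

    def add(self, accession: str, urgency: float) -> None:
        # urgency in [0, 1], e.g. from an ML triage model.
        # heapq is a min-heap, so negate to pop highest urgency first.
        heapq.heappush(self._heap, Exam(-urgency, accession))

    def next_exam(self) -> str:
        return heapq.heappop(self._heap).accession


wl = Worklist()
wl.add("CT-1001", 0.20)  # routine follow-up
wl.add("CT-1002", 0.95)  # suspected acute finding flagged by model
wl.add("CT-1003", 0.55)
print(wl.next_exam())  # → CT-1002
```

The same structure extends naturally to re-scoring: if an algorithm flags an incidental finding after the exam is queued, the study can be re-inserted with a higher urgency score.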
EXTENDED LIFESPANS FOR DYNAMIC DATA
What sort of computing power will it take for medical informaticists to support clinicians in this emerging world of machine-aided image analysis, disease diagnosis, case prioritization and care decision-making? A lot, and the UW team is refining its data strategy and plan.
“Over the next five years or so, one of the biggest changes in informatics is going to be a transition from data collection, data consumption and data archiving to a model where data are collected and stored, but they’re readily available to be used again and again,” Garrett says.
Images won’t be processed and consumed—meaning interpreted—only to be set aside. Rather, clinical data will continually be revisited to guide decisions around future episodes of care.
“From an informatics standpoint, one of the most dramatic shifts is going to be toward a transient storage type of framework,” Garrett continues, “where data are being held very close so they can be brought up and used for a variety of purposes.”
As this long-view management of clinical data becomes ubiquitous, the practice of radiology will be elevated, Wendt suggests.
Machine learning is “going to make us more efficient, make us produce better output, so that our reports are more consumable,” Wendt says. “In the near term, that’s going to be the big benefit. Longer term, data scientists like John Garrett are developing algorithms that we’re now just dreaming about.”
Garrett doesn’t shrink away from the implied friendly pressure. Quite the opposite. He sees ML in the near future facilitating closer collaboration among and between medical specialties that have grown distant from one another due to the sheer volume of clinical knowledge that medical research has produced over the years. Sometimes, he notes, there’s even a chasm between subspecialties within a given specialty.
“It’s very difficult to be a state-of-the-art facility without people in siloes,” Garrett says. “One of the potential advantages of AI, both in radiology and in medicine generally, is that it can start to bridge some of those specialties and subspecialties.”
Garrett gives as an example the nuanced, finely specific data on brain structure now available. “You have a neuroradiologist who has read an imaging exam and has dictated a report that is really tailored to his or her expertise,” he says. With a little help from an AI app, a neurosurgeon on the receiving end of the report “may be able to do even more than they can now.”
Bruce also sees big changes for the role of the radiologist in the era of AI. “In many ways,” he says, “AI will help us to deliver deeper, richer, more contextually sensitive information to providers. It will make us better [clinical] consultants than we can be today.”
Wendt nods in agreement, offering that AI and ML will enable a more proactive approach to medicine than has been possible up till now. This might be nowhere more evident than in cases where an algorithm makes an incidental finding, such as the virtual colonography solution mentioned earlier.
“You’ll be able to actually identify areas of problems that really weren’t questioned,” Wendt says. He names as another example a patient sent for a trauma CT following a fall or car accident. An algorithm will tell if the patient has osteopenia, or weakening of the bones, from that same exam “where that really wasn’t the clinical question that was asked by the trauma physician who ordered the CT scan.”
Bruce amplifies one of the most commonly cited benefits of ML for radiology—handling basic yet time-consuming tasks such as measuring, quantifying and segmenting tumors.
“You can see that as better efficiency for radiology, or you can see it as the patient and medicine in general deriving more value and benefit from exams that are already ordered or completed,” Bruce says. “This is a unique time. It’s a transformative time in medicine.”