AI could alter images, trick radiologists into misdiagnosing cancer patients

Machine learning (ML) algorithms could potentially be trained to alter mammography findings, tricking radiologists into making an incorrect diagnosis, according to new research published in the European Journal of Radiology.

“Most advanced ML algorithms are fundamentally opaque and as they, inevitably, find their way onto medical imaging devices and clinical workstations, we need to be aware that they may also be used to manipulate raw data and enable new ways of cyber-attacks, possibly harming patients and disrupting clinical imaging service,” wrote lead author Anton S. Becker, MD, University Hospital of Zurich in Switzerland, and colleagues.

Becker et al. selected 680 mammographic images from two public datasets to train an ML algorithm called a generative adversarial network (GAN). The goal was to use the cycle-consistent GAN model (CycleGAN) to alter mammographic images after it “learned” the difference between potential cancers and negative findings. Another 892 images were used as a test dataset, with 302 images including potential cancers and the remaining 590 serving as negative controls.
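For readers unfamiliar with the technique, the sketch below shows the general shape of a CycleGAN training loop for unpaired translation between two image domains (here, “negative” versus “suspicious” mammograms). This is not the authors’ code: the network sizes, loss weights, image resolution and random stand-in tensors are illustrative assumptions only, meant to convey how the cycle-consistency and adversarial losses fit together.

```python
# Minimal CycleGAN-style sketch (PyTorch), assuming two unpaired image domains:
# "negative" and "suspicious" mammograms. Architectures, loss weights and the
# random tensors below are illustrative placeholders, not the study's setup.
import torch
import torch.nn as nn

IMG = 256  # the study's low-resolution setting (256 x 256 px)

def generator():
    # Tiny fully convolutional generator mapping one domain to the other.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
    )

def discriminator():
    # Tiny PatchGAN-style critic scoring whether local patches look real.
    return nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=2, padding=1),
    )

# G: negative -> suspicious, F: suspicious -> negative, one critic per domain.
G, F = generator(), generator()
D_susp, D_neg = discriminator(), discriminator()

opt_gen = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)
opt_dis = torch.optim.Adam(list(D_susp.parameters()) + list(D_neg.parameters()), lr=2e-4)
adv, l1 = nn.MSELoss(), nn.L1Loss()  # least-squares GAN loss + L1 cycle loss
lambda_cycle = 10.0                  # assumed cycle-consistency weight

# Random stand-ins for unpaired batches of negative / suspicious mammograms.
neg = torch.rand(2, 1, IMG, IMG) * 2 - 1
susp = torch.rand(2, 1, IMG, IMG) * 2 - 1

for step in range(2):  # a couple of steps, just to show the update structure
    # Generator update: fool both critics and preserve cycle consistency,
    # i.e. F(G(neg)) should reconstruct neg and G(F(susp)) should reconstruct susp.
    fake_susp, fake_neg = G(neg), F(susp)
    cycle_neg, cycle_susp = F(fake_susp), G(fake_neg)
    pred_s, pred_n = D_susp(fake_susp), D_neg(fake_neg)
    g_loss = (
        adv(pred_s, torch.ones_like(pred_s))
        + adv(pred_n, torch.ones_like(pred_n))
        + lambda_cycle * (l1(cycle_neg, neg) + l1(cycle_susp, susp))
    )
    opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()

    # Discriminator update: real images should score 1, generated images 0.
    real_s, fake_s = D_susp(susp), D_susp(fake_susp.detach())
    real_n, fake_n = D_neg(neg), D_neg(fake_neg.detach())
    d_loss = 0.5 * (
        adv(real_s, torch.ones_like(real_s)) + adv(fake_s, torch.zeros_like(fake_s))
        + adv(real_n, torch.ones_like(real_n)) + adv(fake_n, torch.zeros_like(fake_n))
    )
    opt_dis.zero_grad(); d_loss.backward(); opt_dis.step()
    print(f"step {step}: G loss {g_loss.item():.3f}, D loss {d_loss.item():.3f}")
```

Because the mapping is learned from unpaired examples, a trained generator of this kind could in principle be applied to a new mammogram to nudge it toward the other domain, which is the attack scenario the study examines.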

Three radiologists then read a series of images, some modified by the CycleGAN architecture and some left in their original form, and used a scale of one to five to indicate whether they detected a suspicious finding. The radiologists were also asked to rate how likely it was that each image had been altered in some way. This experiment occurred twice, once at a lower resolution (256 x 256 px) and once at a higher resolution (512 x 408 px).

Overall, at the lower resolution, the team found that the CycleGAN modifications did not have a negative impact on radiologist performance. However, the specialists “could not discriminate between original and modified images.” At the higher resolution, on the other hand, radiologists had a lower cancer detection rate in the modified images than in the untouched images. They could also detect when an image had been modified, thanks to the improved visibility.

These findings, the authors explained, mean that a CycleGAN could potentially add or remove “suspicious features” from a patient’s imaging results. “The method is limited,” they wrote, but it should still be studied “in order to shield future devices and software from AI-mediated attacks.”

And why would bad actors want to make such changes to imaging findings? Becker and colleagues offered some ideas.

“Regarding medical imaging, we can imagine two categories of attacks: focused and generalized attacks,” they wrote. “In a focused attack, an algorithm would be altered so it would misdiagnose a targeted person (e.g. political candidate or company executive) in order to achieve a certain goal (e.g. manipulation of election or hostile company takeover). In a generalized attack, a great number of devices would be infected with the malicious algorithm lying dormant most of the time and stochastically leading to a certain number of misdiagnoses, causing potentially fatal outcomes for the affected patients, increased cost for the whole healthcare system and—ultimately—undermining the public trust in the healthcare system.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
