AI developers working to refine COVID-19 detection in chest x-rays must navigate two unavoidable speed bumps.
The first is getting data from hospitals on which to train their algorithms. The second is validating their approach, prioritizing accurate results over exciting methodologies.
In an article posted May 5, the free online magazine Physics speaks with several researchers who are succeeding, to one degree or another, on both fronts.
One is computer scientist Joseph Paul Cohen, PhD, of the University of Montreal. A year ago AIin.Healthcare reported on his AI tool called Chester, which at the time could distinguish between 14 lung conditions.
Now Physics senior editor Katherine Wright reports that, when COVID-19 exploded into a pandemic, he wanted to expand Chester’s capabilities so it could help in the fight.
However, Cohen “quickly came up against a wall because hospitals didn’t want to share their data,” Wright writes.
Cohen adds, in his own words: “It’s just a nightmare to get images.”
Another source Wright quotes is engineering scientist Alistair Johnson, DPhil, of the Massachusetts Institute of Technology. He notes that the applicability and reliability of any AI algorithm are subject to several variables.
In the case of AI for x-ray interpretation, such confounders might include the x-ray equipment, the patient’s position during image acquisition and/or the protocols of the facility at which the patient was imaged.
“There are all sorts of biases that can trip you up,” Johnson says.
Of course, to an AI developer with a clear goal in sight, no speed bump is insurmountable.
“In places like New York, where there was this huge explosion of [COVID-19] patients, they’re already taking chest x-rays alongside viral testing,” says Alexander Wong, P.Eng, PhD, of the University of Waterloo in Canada. “So why not have a greater immediate impact by building AI [software] to help screen through all those images?”
Click here to read the rest in Physics, which is produced by the American Physical Society.