The risk of hacking and security breaches weighs heavily on the development of AI systems, but a more pressing threat may come from healthcare stakeholders such as insurance providers and billing companies, the New York Times reported March 21, citing a study published in Science.
While neural networks train themselves to complete certain tasks by analyzing massive amounts of data, it takes just a small digital manipulation to change the output of such a system, Samuel Finlayson, a researcher at Harvard Medical School and MIT and an author of the paper, told the Times. Just as researchers have trained AI systems to detect breast cancer, a hacker could subtly alter an image so the system fails to flag a malignancy.
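The kind of small digital manipulation Finlayson describes can be sketched with a toy linear classifier. Everything below is illustrative and assumed, not drawn from the study: a stand-in set of weights plays the role of a trained model, and a fast-gradient-sign-style step nudges each "pixel" just enough to push a flagged image below the detection threshold.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=64)       # toy classifier weights (stand-in for a trained model)
x = 0.5 * w / (w @ w)         # benign input constructed so the raw score is exactly 0.5

p_clean = sigmoid(w @ x)      # > 0.5: "malignancy flagged"

# For a linear model, the gradient of the score with respect to the input is w
# itself, so stepping each pixel slightly against sign(w) lowers the score the
# most per unit of change -- the fast-gradient-sign idea.
eps = 1.0 / np.abs(w).sum()   # per-pixel step of roughly 0.02 here
x_adv = x - eps * np.sign(w)  # visually negligible perturbation

p_adv = sigmoid(w @ x_adv)    # < 0.5: malignancy no longer flagged
```

Each pixel moves by only about 0.02, yet the classifier's decision flips; real attacks apply the same idea to deep networks by computing the gradient through the whole model.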
Researchers have shown time and again that AI can be fooled in unexpected ways: self-driving cars have interpreted stop signs as yield signs, for instance, and patterned eyeglass frames have tricked facial recognition systems into misidentifying their wearers as celebrities. But beyond outside attackers, Finlayson and his colleagues said they’re concerned about what powerful healthcare stakeholders could achieve with data manipulation.
“The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information,” Finlayson told the Times.
He said that because so much money changes hands in the healthcare industry, insurance and billing companies are already “bilking the system” by altering billing codes and other data within their computer systems. As AI becomes more commonplace in those settings, there is a greater possibility that stakeholders could manipulate scans for better payouts or alter images to expedite regulatory approval.
“Some of the behavior is unintentional, but not all of it,” Hamsa Bastani, an assistant professor at the University of Pennsylvania who’s studied the manipulation of healthcare systems, told the Times. “There are always unintended consequences, particularly in healthcare.”