If an AI-brained surgical robot refuses to do its job at a critical moment in the OR, who is responsible for stepping in?
The scenario may seem far-fetched, or at least premature, given the state of the technology in 2020. However, it’s not too soon to begin thinking through such scenarios, suggest the authors of a new whitepaper on the ethics of AI in surgery specifically.
“A prescriptive plan for future AI systems would be, in some sense, like making life choices by gazing into a crystal ball,” they write. “However, it will be important to remember 1) that the nature of ethical challenges of AI in surgery will remain dynamic for some time, due to the evolving and constantly shifting technological capabilities, and 2) increasing AI autonomy will drastically expand the ethical paradigm and the challenges that come with it.”
The authors are Frank Rudzicz, PhD, and doctoral candidate Raeid Saqur, both of the University of Toronto and the Vector Institute for Artificial Intelligence in the same city.
In coverage of the paper at VentureBeat, AI watcher Kyle Wiggers underscores the authors’ argument that the pros of adopting AI for surgery have the potential to outweigh the cons—especially for the discrete task of avoiding harms.
For example, in thyroidectomy, there’s risk of permanent hypoparathyroidism and recurrent nerve injury, Wiggers notes.
“It might take thousands of procedures with a new method to observe statistically significant changes, which an individual surgeon might never observe—at least not in a short time frame,” he writes. “However, a repository of AI-based analytics aggregating these thousands of cases from hundreds of sites would be able to discern and communicate those significant patterns.”