The blame game: Who takes responsibility for AI’s mistakes?

Recent advances in AI have enabled positive change in numerous areas, including public safety, sustainability and healthcare. But when algorithms go awry—as some inevitably will—who should shoulder the blame?

The easy answer, Yochanan E. Bigman and co-authors wrote in Trends in Cognitive Sciences, is that the humans who programmed the faulty machines should be held responsible for any harm they cause. But with the development of advanced techniques like deep neural networks, the technology is learning and growing on its own, gaining a degree of autonomy that leaves us in a moral gray area.

Bigman and colleagues said scientists define autonomy as a robot’s ability to function alone in dynamic, real-world environments for extended periods of time, but for laypeople the idea is simpler: autonomy is equated with mental capacity. How people perceive something’s “mind” predicts their moral judgments of it, and as robots become increasingly autonomous, their perceived capacity for moral responsibility will grow.

“Some may balk at the idea that robots have (or will have) any human-like mental capacities, but people also long balked at the idea that animals had minds, and now think of them as having rich inner lives,” the authors wrote. “Of course, animals are flesh and blood whereas machines are silicon and circuits, but research emphasizes that minds are always matters of perception.”

Certain factors will influence humans’ perceptions of robots more than others, they said. For example, people are more likely to attribute moral responsibility to machines with human-like bodies, faces and voices than to a software system alone.

It’s also more likely that humans will ascribe responsibility to robots they perceive as having intentionality, situational awareness and free will. All of those characteristics are relatively predictable today given the transparency of AI’s human programming, but advances such as neural networks, which allow machines to learn on their own without human input, mean the inner workings of machines are becoming less clear.

“As technology advances, these increased capacities (e.g. the ability to walk, shoot, operate and drive) will allow robots to cause more damage to humans,” Bigman et al. wrote. “If people cannot find another person to hold responsible, they will seek other agents, including corporations and gods, and infer the capacity for intention.

“This link between suffering and intention means that the more robots cause damage, the more they will seem to possess intentionality, and thus lead to increased perceptions of moral responsibility.”

That logic could also allow certain makers of AI, like big corporations and governments, to deflect blame for the technology’s mistakes onto the machinery itself, the authors said, which could be dangerous. Moving forward, they suggested, we need to determine where to draw the line in treating robots as moral beings.

“Whether we are considering questions of moral responsibility or rights, issues of robot morality may currently seem like science fiction,” Bigman and co-authors said. “However, we suggest that now, while machines and our intuitions about them are still in flux, is the best time to systematically explore questions of robot morality.”

""
