3 ways to solve the bias problem in AI

As AI systems move out of the laboratory and into real-world settings, bias will continue to be “an increasingly widespread problem.” So how can it be solved?

Researchers at IBM are currently working on automated bias detection algorithms to combat the problem, but the solution may not be AI alone. According to a report published in Forbes, the problem likely runs deeper: societal bias may be the actual problem.

In healthcare, bias is a well-documented issue.

Rumman Chowdhury, PhD, artificial intelligence lead at Accenture, noted that societal bias can still skew outcomes even when the data and algorithms themselves are clean. She listed three specific steps organizations can take to minimize the impact of societal biases.

  1. Look at the algorithms and ensure that they are not coded in a way that extends bias.
  2. Look at if AI can help alleviate the risk of biased data—similar to what IBM is trying to accomplish.
  3. Regulate AI and design the proper parameters for AI to operate within. Teach algorithms which data is valid, valuable, and ethical to learn from.
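To make the second step concrete, here is a minimal sketch of one common automated bias check: comparing a model's positive-prediction rates across demographic groups (the "demographic parity" gap). The metric, function name, and sample data are illustrative assumptions, not IBM's actual method.

```python
# Illustrative bias check: measure how much a model's positive-outcome rate
# differs between demographic groups. All names and data are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates between groups."""
    counts = {}  # group -> (total examples, positive predictions)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two groups, "A" and "B".
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, grps)
print(f"demographic parity gap: {gap:.2f}")  # prints "demographic parity gap: 0.50"
```

A large gap like this would flag the model or its training data for human review; in practice, auditors compute several such fairness metrics, since no single number captures every kind of bias.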
