AI needs more diversity to avoid data bias

AI has the potential to disrupt the healthcare industry and improve patient outcomes through faster diagnosis and more accurate, targeted treatment. But how AI algorithms are trained needs some improvement, according to Naga Rayapati, founder and CEO/CTO of online marketplace GoGetter, who penned an article for Forbes.

Namely, the data that AI systems are trained with needs to be more diverse to avoid bias in the algorithms. In healthcare, bias can inadvertently harm patients through discrimination.

“AI companies have a moral obligation to their customers, and to themselves, to actively address data bias,” Rayapati wrote.

Not addressing bias in the AI space could have detrimental impacts, including the possible rejection of the technology or sub-par products. In addition, bias could have legal implications in the future, according to Rayapati.

While machine learning systems aren’t inherently biased themselves, the data used to build their algorithms can carry built-in bias. For example, an AI system used to assist with sentencing recommendations disproportionately suggested stricter sentences for minorities, Rayapati wrote. To keep AI systems unbiased, the issue must be addressed when data is collected or curated. Above all, the data must be diverse.


Amy Baxter

Amy joined TriMed Media as a Senior Writer for HealthExec after covering home care for three years. When not writing about all things healthcare, she fulfills her lifelong dream of becoming a pirate by sailing in regattas and enjoying rum. Fun fact: she sailed 333 miles across Lake Michigan in the Chicago Yacht Club "Race to Mackinac."
