The Complex Issue of Fairness in AI (Part I)
By Ivana Bartoletti  |  Sep 29, 2021
Artificial intelligence bias takes several forms. Ivana Bartoletti investigates the cause and effect of AI bias, and tackles what needs to be done to create trust in AI systems.

LONDON, ENGLAND - Artificial intelligence (AI) bias is rarely out of the spotlight as the sector continues to hurtle down the scientific highway, breaking new ground and changing the way people live their lives on an almost daily basis.

News that existing biases are hardwired into the algorithms replacing humans in administration, healthcare, and recruitment often steals the headlines. A recent study published in Science, which concluded that an algorithm used in the United States healthcare system was more likely to refer sick white patients to medical programs than equally sick Black patients, makes it easy to see why.

Amazon tried to build an AI resume-screening tool trained on resumes the company had collected over the previous decade, Reuters reported in 2018. However, those resumes tended to come from men, so the system learned to discriminate against women. In 2019, the Apple-branded credit card came under intense scrutiny because women were receiving lower credit limits than their spouses, despite having the same income and credit score.

Company boards are talking about these issues for several reasons, but first of all to safeguard their reputations. As algorithmic bias becomes headline news, prioritizing fairness in AI means preserving trust in their brands. Firms that do not do so end up under intense scrutiny from campaigners and regulators. American Express, for example, whipped up a storm of controversy in 2009 when it notified some customers that their credit limits were being cut because an algorithm suggested they would fall behind on payments. The New York Times made it headline news, and Amex was forced to concede it would no longer correlate where customers shopped with credit risk.

Reputation is not the only issue, however. Algorithmic bias can lead to prediction inaccuracies and therefore to wrong decisions. A hiring algorithm that discriminates against female applicants ends up rejecting p
