The Complex Issue of Fairness in AI (Part I)
By Ivana Bartoletti  |  Sep 29, 2021
Artificial intelligence bias takes several forms. Ivana Bartoletti investigates the causes and effects of AI bias and examines what must be done to build trust in AI systems.

LONDON, ENGLAND - Artificial intelligence (AI) bias is rarely out of the spotlight as the sector continues to hurtle down the scientific highway, breaking new ground and changing the way people live their lives on an almost daily basis.

Headlines are often dominated by reports that existing biases are hardwired into the algorithms replacing human judgment in administration, healthcare, and recruitment. A recent study published in Science, which concluded that an algorithm used in the United States healthcare system was more likely to refer sick white patients to medical programs than equally sick Black patients, makes it easy to see why.

Amazon tried to build an AI resume-screening tool trained on resumes the company had collected over the previous decade, Reuters reported in 2018. Because those resumes came predominantly from men, the system learned to discriminate against women. In 2019, the Apple-branded credit card came under intense scrutiny after women were offered lower credit limits than their spouses despite having the same income and credit scores.

Company boards are discussing these issues for several reasons, but above all to safeguard their reputations. As algorithmic bias becomes headline news, prioritizing fairness in AI means preserving trust in their brands. Firms that fail to do so face intense scrutiny from campaigners and regulators. American Express, for example, whipped up a storm of controversy in 2009 when it notified some customers that their credit limits were being cut because an algorithm suggested they would fall behind on payments. The New York Times made it headline news, and Amex was forced to concede it would no longer correlate the stores where customers shopped with credit risk.

Reputation is not the only issue, however. Algorithmic bias can lead to prediction inaccuracies and therefore to wrong decisions. A hiring algorithm that discriminates against female applicants ends up rejecting promising candidates.
