The Complex Issue of Fairness in AI (Part II)
By Ivana Bartoletti  |  Nov 05, 2021
Ivana Bartoletti poses the question ‘What is fairness?’ as she continues her investigation of the complex issue of bias in artificial intelligence around race, sex, and economic status, and proposes possible courses of action to build trust and address these problems.

LONDON – An algorithm can pass one fairness test yet fail another, which presents complex issues, among them that definitions of fairness vary. The concept of equal opportunity is likewise subject to divergent interpretations, from equalizing everyone's starting point to adjusting it for differing individual backgrounds.

These different underlying ethical assumptions then yield clashing mathematical, ethical, and computational definitions of fairness.

In one now notorious case, COMPAS, an algorithm used to forecast future criminal behavior, came under scrutiny after Black defendants who did not reoffend were found to be nearly twice as likely as white ones to have been incorrectly flagged as future repeat offenders. Northpointe, the company that created the algorithm, maintains that it is non-discriminatory because the rate of accuracy of its scores is identical for Black and white defendants. Both perspectives sound fair, but they rest on different conceptions of what fairness means, and it is mathematically impossible to meet both objectives at the same time when the underlying rates of reoffending differ between groups.
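The tension between the two fairness criteria can be made concrete with a small sketch. The counts below are purely hypothetical, not actual COMPAS data: they are constructed so that the score's predictive value (Northpointe's notion of fairness) is identical for both groups, yet the false positive rate (the notion behind the criticism) is not, because the two groups have different base rates.

```python
# Hypothetical per-group confusion-matrix counts (illustrative only,
# not real COMPAS figures): tp/fp/tn/fn for a "high risk" label.
groups = {
    "A": {"tp": 60, "fp": 30, "tn": 70,  "fn": 40},   # base rate 0.50
    "B": {"tp": 30, "fp": 15, "tn": 135, "fn": 20},   # base rate 0.25
}

def metrics(c):
    # Positive predictive value: of those labeled high risk, how many reoffend.
    ppv = c["tp"] / (c["tp"] + c["fp"])
    # False positive rate: of those who do not reoffend, how many were flagged.
    fpr = c["fp"] / (c["fp"] + c["tn"])
    return ppv, fpr

for name, counts in groups.items():
    ppv, fpr = metrics(counts)
    print(f"group {name}: PPV = {ppv:.2f}, FPR = {fpr:.2f}")
```

Here both groups get the same PPV (2/3), so the score "means the same thing" for each group, yet group A's false positive rate is three times group B's. With unequal base rates, equalizing one metric forces the other apart, which is the mathematical impossibility the COMPAS dispute illustrates.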

“One of the most compelling uses for AI-powered algorithms is to eliminate the biases that infect human decision-making… But… algorithmic decisions still produce biased and discriminatory outcomes,” Rebecca Kelly Slaughter, commissioner of the US Federal Trade Commission (FTC), said in a speech last year, adding, “We have seen mounting evidence of AI-generated economic harms in employment, credit, healthcare, and housing.”

The choice of how to interpret fairness speaks to an organization's values, and one might urge its full articulation under the accountability requirements of privacy law and of corporate responsibility and ethics. In other words, selecting a fairness metric is a contextually nuanced decision, and shaping a blanket policy may thus prove too difficult. With no incentive to consider fairness, model developers are apt to overlook it.
