BERLIN - In 2020, the Berlin-based non-governmental organization (NGO) AlgorithmWatch launched a project to monitor the newsfeed algorithm of Meta's Instagram. Only by understanding how platforms' algorithmic decisions affect society can we take action to ensure they do not undermine our autonomy, our freedom, and the common good.
Volunteers could install a browser add-on that scraped their Instagram newsfeeds and sent the data to a database, which we at the non-profit HIPPO AI Foundation used to study how Instagram prioritizes pictures and videos in a user's timeline. The research question was simple: which images and videos do the algorithms of Meta, formerly Facebook, favor?
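To make that question concrete, here is a minimal sketch of the kind of analysis such donated data permits. Everything in it is an assumption for illustration: the `donated_feeds.json` file, its schema (a list of scraped timelines, each an ordered list of posts with a `media_type` field), and the rank comparison are hypothetical, not AlgorithmWatch's actual pipeline.

```python
import json
from collections import defaultdict

# Hypothetical schema: a list of donated timelines, each an ordered list
# of posts such as {"media_type": "video", "account": "..."}. The real
# AlgorithmWatch data format is not public; this is illustrative only.
with open("donated_feeds.json") as f:
    timelines = json.load(f)

# Collect the rank (0 = top of feed) of each post, grouped by media type.
ranks = defaultdict(list)
for timeline in timelines:
    for rank, post in enumerate(timeline):
        ranks[post["media_type"]].append(rank)

# A lower average rank suggests the algorithm surfaces that media type
# earlier in users' feeds.
for media_type, positions in sorted(ranks.items()):
    avg = sum(positions) / len(positions)
    print(f"{media_type}: {len(positions)} posts, average rank {avg:.2f}")
```

Comparing average ranks is of course only a first-order signal; a serious study would also have to control for posting frequency, follower behavior, and who each volunteer follows.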
Understanding the reasoning behind algorithmic decisions is a fundamental difficulty in deploying algorithms, because many of them operate as so-called Black Boxes. Within the artificial intelligence (AI) industry, the Black Box problem mostly refers to the technical Black Box: models whose internal workings are opaque even to their own developers. The prevalence of such models has driven research and development in explainable AI, a field concerned with how AI models arrive at their results, with the goal of making each decision traceable back to the inputs and logic that produced it.
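As one illustration of what explainable-AI tooling looks like in practice, the sketch below uses permutation importance, a common model-agnostic technique available in scikit-learn, to estimate which input features drive a Black Box model's predictions. The model and data are synthetic assumptions for the example, not anything from the AlgorithmWatch study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a Black Box model: a random forest trained on
# generated data with 5 features, only 2 of which are informative.
X, y = make_classification(
    n_samples=1000, n_features=5, n_informative=2,
    n_redundant=0, random_state=0,
)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops. A large drop means the model relies
# heavily on that feature, hinting at the reasoning inside the box.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```

Techniques like this only approximate a model's reasoning from the outside, and they presuppose access to the model and its data, which is precisely what the legal Black Box discussed next withholds.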
But as the case of AlgorithmWatch versus Meta demonstrates, the major obstacle to understanding algorithmic decision-making might not be the technical Black Box but what the literature describes as the legal Black Box. The term refers to opacity that arises not from a model's complexity but from the fact that the software is proprietary, so its code and data may not be inspected without the owner's consent.