The Missing Link in Europe's AI Strategy
By Aida Ponce Del Castillo  |  Sep 14, 2021
The EU must engage with its citizens and protect its workers to build public trust in AI and become a global leader in the field, yet these considerations take a back seat to the push to form the proposed single market for data, Aida Ponce Del Castillo asserts, and the protection of workers does not figure in the plan at all.

BRUSSELS - Regulation of AI systems should not be based on their providers’ self-assessment. Europe can become a global leader in the field and foster genuine public trust in and acceptance of this emerging technology, but only if it effectively protects and involves its citizens and workers.

The strategy of the European Commission (EC) for artificial intelligence (AI) focuses on the need to establish “trust” and “excellence.” The recently proposed AI regulation, the EC argues, will create trust in this new technology by addressing its risks, while excellence will follow from EU member states’ investment and innovation. With these two factors in place, the blueprint holds, Europe’s uptake of AI will accelerate.

Unfortunately, protecting EU citizens’ fundamental rights, which should be the AI regulation’s core objective, appears to be a secondary consideration, and the protection of workers’ rights seems not to have been considered at all. AI is the flagship of Europe’s digital agenda, and the Commission’s legislative package is fundamental to the proposed single market for data. The draft regulation establishes rules for the introduction, implementation, and use of AI systems. It adopts a risk-based approach, classifying uses as posing unacceptable, high, limited, or low risk.

Under the proposal, AI systems deemed ‘high-risk’ – those posing significant perils to the health and safety or fundamental rights of persons – are subject to an ex-ante conformity assessment to be carried out by the provider, without prior validation by a competent external authority. Requirements include high-quality data sets, sound data governance and management practices, extensive record-keeping, adequate risk management, detailed technical documentation, transparent user instructions, appropriate human oversight, explainable results, and a high level of accuracy, robustness, and cybersecurity.

The Commissi
