Persuasive AI threatens society, democracy - even humanity itself
By Terence Tse, Mark Esposito, Joshua Entsminger  |  Feb 20, 2024
What does it take to change a person’s mind? As GenAI becomes more embedded in customer-facing systems such as human-like phone calls and online chatbots, this ethical question demands broad attention - especially given AI’s tendency to facilitate the spread of disinformation.

LONDON - The capacity to change minds through reasoned discourse lies at the heart of democracy. Clear and effective communication forms the foundation of deliberation and persuasion, which are essential for resolving competing interests. Unfortunately, there is also a dark side to persuasion - false motives, lies, cognitive manipulation, and other malicious behavior that artificial intelligence (AI) at times facilitates.

In the not-so-distant future, generative AI (GenAI) will likely enable new user interfaces designed to persuade people on behalf of any person or entity with the means to build such a system. By drawing on private knowledge bases, these specialized models will offer competing versions of the truth, each judged by its ability to generate convincing responses for a target group - in effect, an AI for each ideology. A wave of AI-assisted social engineering would surely follow, with escalating competition making it easier and cheaper for bad actors to spread disinformation and perpetrate scams.

The emergence of GenAI has thus fueled a crisis of epistemic insecurity. The initial policy response has been to ensure that humans know they are engaging with an AI. Last June, the European Commission urged large tech companies to start labeling text, video, and audio created or manipulated by AI tools, and the European Parliament is now pushing for a similar rule in the forthcoming AI Act. This awareness, the argument goes, will prevent artificial agents - no matter how convincing they may be - from misleading people.

Alerting people to the presence of AI is a good start, but still not enough to safeguard them against manipulation. As ear

The content herein is subject to copyright by Project Syndicate. All rights reserved. The content of the services is owned or licensed to The Yuan. The copying or storing of any content for anything other than personal use is expressly prohibited without prior written permission from The Yuan, or the copyright holder identified in the copyright notice contained in the content.