Artificial idiocy is a serious problem that exacerbates existing issues
By Slavoj Žižek  |  May 01, 2023
Chatbots are not a new phenomenon, but as ChatGPT shows, they are now capable of far more than they used to be. Rules and oversight are seemingly necessary but are also no silver bullet, and the world still must decide how much regulation is too much.

LJUBLJANA - There is nothing new about ‘chatbots’ that can maintain a conversation in natural language, understand a user’s basic intent, and offer responses based on preset rules and data. What is new is that the capacity of such chatbots has been dramatically augmented in recent months, prompting much hand-wringing and panic in many circles.

Many experts talk about chatbots auguring the end of the traditional student essay. However, a more important issue that warrants closer attention is how chatbots should respond when human interlocutors use aggressive, sexist, or racist remarks to prompt the bot to present its own foul-mouthed fantasies in return. Should artificial intelligence (AI) applications be programmed to answer at the same level as the questions posed to them?

If the world decides that some kind of regulation is in order, the next step is to determine how far any censorship should go. Will political positions that some cohorts deem ‘offensive’ be prohibited? What about expressions of solidarity with West Bank Palestinians, or the claim that Israel is an apartheid state (which former United States President Jimmy Carter once put into the title of a book)? Will these be blocked for being ‘anti-Semitic’?

The problem does not end there. As artist and writer James Bridle warns, the new AIs are “based on the wholesale appropriation of existing culture,” and the belief that they are “actually knowledgeable or meaningful is actively dangerous.” Hence, one must be very wary of new AI image generators. “In their attempt to understand and replicate the entirety

The content herein is subject to copyright by Project Syndicate. All rights reserved. The content of the services is owned or licensed to The Yuan. The copying or storing of any content for anything other than personal use is expressly prohibited without prior written permission from The Yuan, or the copyright holder identified in the copyright notice contained in the content.