LJUBLJANA - There is nothing new about ‘chatbots’ that can maintain a conversation in natural language, understand a user’s basic intent, and offer responses based on preset rules and data. The difference now is that the capabilities of such chatbots have been dramatically augmented in recent months, prompting handwringing and panic in many circles.
Many experts say chatbots augur the end of the traditional student essay. But a more important issue, one that warrants closer attention, is how chatbots should respond when human interlocutors use aggressive, sexist, or racist remarks to prompt the bot to present its own foul-mouthed fantasies in return. Should artificial intelligence (AI) applications be programmed to answer at the same level as the questions posed to them?
If the world decides that some kind of regulation is in order, the next step is to determine how far any censorship should go. Will political positions that some cohorts deem ‘offensive’ be prohibited? What about expressions of solidarity with West Bank Palestinians, or the claim that Israel is an apartheid state (which former United States President Jimmy Carter once put into the title of a book)? Will these be blocked for being ‘anti-Semitic’?
The problem does not end there. As artist and writer James Bridle warns, the new AIs are “based on the wholesale appropriation of existing culture,” and the belief that they are “actually knowledgeable or meaningful is actively dangerous.” Hence, one must be very wary of new AI image generators. “In their attempt to understand and replicate the entirety […]”