The power of love
By Bart de Witte  |  Jul 22, 2022
Most people cannot distinguish between an account operated by an actual human and one run by a chatbot. As algorithms and bots become more sophisticated, they influence how people think, the emotions they express, and even their personalities. Such 'sentient' AI, with the power to control emotions, actions, and opinions, is potentially highly dangerous. More oversight is thus needed, but it should not come at the expense of openness or of benefits to society.

BERLIN - "I know a human when I talk to one," said Blake Lemoine. An article published in the Washington Post on June 11th revealed that this software engineer, who works in Google's Responsible Artificial Intelligence (AI) organization, had made an astonishing claim: He believed that Google's chatbot LaMDA (Language Model for Dialogue Applications) was sentient. These claims have been widely dismissed by others working in the AI industry, but the current focus of this debate might not be the right one.

As the AI community continues its debate about whether AI can be sentient or not, one should consider a different question: If not even a Google engineer can tell the difference between a sentient and non-sentient AI system, how will the majority of Internet users be able to tell?

Over the last 15 years, nerds backed by venture capital and armed with neuroscientific knowledge have managed to disrupt humans' dopaminergic neurotransmission by training algorithms designed for attraction, addiction, and instant gratification. Humans experience surges of dopamine for both their virtues and their vices, and the dopamine pathway has been particularly well studied in research on addiction. One well-known example is that Instagram's notification algorithms sometimes hold back 'likes' on photos in order to deliver them in larger bursts later. A user who creates a post may initially be disappointed to receive fewer reactions than expected, only to get them in a larger batch later on. That disappointing initial result primes the brain's dopamine centers to respond all the more strongly to the sudden subsequent influx of social recognition. Such a variable reward schedule exploits humans' dopamine-driven desire for social affirmation, tuning the balance between negative and positive feedback until one becomes a habitual user. The upshot of this is that the majority of
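
To make that batching mechanism concrete, here is a minimal, purely illustrative Python sketch of a variable reward schedule: incoming likes are withheld and then released all at once at unpredictable moments. The class, parameters, and probabilities are hypothetical inventions for illustration only; Instagram's actual notification system is proprietary and is not described here.

```python
import random

class NotificationBatcher:
    """Illustrative variable-reward scheduler: withholds incoming 'likes'
    and releases them in occasional larger bursts instead of one by one.
    Purely hypothetical; not based on any real platform's implementation."""

    def __init__(self, burst_probability=0.2):
        self.pending_likes = 0                  # likes received but not yet shown
        self.burst_probability = burst_probability

    def on_like_received(self):
        """A new like arrives but is held back rather than shown immediately."""
        self.pending_likes += 1

    def maybe_notify(self):
        """Called periodically. On a random schedule, flush all withheld likes
        at once, producing the intermittent 'burst' of social feedback."""
        if self.pending_likes and random.random() < self.burst_probability:
            burst, self.pending_likes = self.pending_likes, 0
            return burst                        # deliver the whole backlog in one burst
        return 0                                # otherwise, keep the user waiting


# Example: likes trickle in steadily, but the user only sees them in occasional bursts.
batcher = NotificationBatcher()
for tick in range(20):
    if random.random() < 0.5:                   # simulate sporadic incoming likes
        batcher.on_like_received()
    shown = batcher.maybe_notify()
    if shown:
        print(f"tick {tick}: notified user of {shown} likes at once")
```

The design point this sketch is meant to highlight is the unpredictability of the release: because the user cannot know when the burst of recognition will arrive, checking the app itself becomes the intermittently rewarded behavior.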
