Can algorithms really learn empathy?
By Barry Eichengreen  |  Feb 22, 2023
Regardless of how history views ChatGPT, it has certainly succeeded at grabbing the spotlight and riveting public attention on AI and its current capabilities. Still, the brouhaha should not obscure the fact that ChatGPT remains a rudimentary tool, more a beginning than a finished product, with huge room for improvement.

STOCKHOLM - With hindsight, 2022 will be seen as the year when artificial intelligence (AI) gained street credibility. The release of ChatGPT by the San Francisco-based research laboratory OpenAI garnered great attention and raised even greater questions.

In just its first week, ChatGPT attracted more than a million users and was used to write computer programs, compose music, play games, and take the bar exam. Students discovered that it could write serviceable essays worthy of a B grade - as did teachers, albeit more slowly and to their considerable dismay.

ChatGPT is far from perfect, much as B-quality student essays are far from perfect. The information it provides is only as reliable as the information available to it, which comes from the internet. How it uses that information depends on its training, which involves supervised learning, or - put another way - questions asked and answered by humans.

The weights that ChatGPT attaches to its possible answers are derived from reinforcement learning, in which humans rate its responses. ChatGPT's millions of users are asked to upvote or downvote the bot's responses each time they ask a question. In the same way that useful feedback from an instructor can sometimes teach B-quality students what they need to do to write an A-quality essay, it is possible that ChatGPT will eventually earn better grades.
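The idea behind this feedback loop can be illustrated with a deliberately simplified sketch. This is not OpenAI's actual training method (which involves a learned reward model and policy optimization over a large neural network); it merely shows, under toy assumptions, how repeated upvotes and downvotes can shift the weights attached to candidate answers so that better-rated responses become more likely to be chosen:

```python
import random

# Toy illustration only: two hypothetical candidate responses start with
# equal weights; each user vote nudges the corresponding weight up or down.
weights = {"response_a": 1.0, "response_b": 1.0}
LEARNING_RATE = 0.1  # size of the nudge per vote (assumed value)

def record_vote(response, upvote):
    """Shift a response's weight based on a single upvote or downvote."""
    delta = LEARNING_RATE if upvote else -LEARNING_RATE
    weights[response] = max(0.0, weights[response] + delta)

def pick_response():
    """Sample a response with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for response, w in weights.items():
        r -= w
        if r <= 0:
            return response
    return response  # fallback for floating-point edge cases

# Simulate feedback in which users consistently prefer response_a.
for _ in range(20):
    record_vote("response_a", upvote=True)
    record_vote("response_b", upvote=False)
```

After the simulated votes, `response_a` carries far more weight than `response_b`, so `pick_response()` almost always returns it. Real systems replace this lookup table with a reward model that generalizes ratings to answers no human has ever graded.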

This rudimentary AI forces people to rethink which tasks can be carried out with minimal human intervention. If an AI is capable of passing the bar exam, then is there any reason it could not write a legal brief or give sound legal advice? And if an AI can pass a medical-licensing exam…

The content herein is subject to copyright by Project Syndicate. All rights reserved.