


STOCKHOLM - With hindsight, 2022 will be seen as the year when artificial intelligence (AI) gained street credibility. The release of ChatGPT by the San Francisco-based research laboratory OpenAI garnered great attention and raised even greater questions.
In just its first week, ChatGPT attracted more than a million users and was used to write computer programs, compose music, play games, and take the bar exam. Students discovered that it could write serviceable essays worthy of a B grade - as did teachers, albeit more slowly and to their considerable dismay.
ChatGPT is far from perfect, much as B-quality student essays are far from perfect. The information it provides is only as reliable as the information available to it, which comes from the internet. How it uses that information depends on its training, which involves supervised learning, or - put another way - questions asked and answered by humans.
The weights that ChatGPT attaches to its possible answers are derived from reinforcement learning, in which humans rate its responses. ChatGPT's millions of users are asked to upvote or downvote the bot's answers each time they pose a question. In the same way that useful feedback from an instructor can sometimes teach B-quality students what they need to do to write an A-quality essay, it is possible that ChatGPT will eventually earn better grades.
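For readers curious about the mechanics, the idea can be reduced to a toy sketch in Python. It assumes a deliberately simplified setup in which each candidate response is summarized by a single numeric "helpfulness" feature and each human vote is +1 or -1; the feature, the learning rule, and all numbers are illustrative assumptions, not OpenAI's actual training pipeline.

import random

# Each item pairs a crude feature describing a response with a human vote (+1 up, -1 down).
feedback = [
    (0.9, +1), (0.8, +1), (0.7, +1),   # upvoted responses
    (0.2, -1), (0.3, -1), (0.1, -1),   # downvoted responses
]

# Fit a one-parameter reward model: reward = weight * feature.
# The update nudges the weight so upvoted responses score higher and downvoted ones lower.
weight = 0.0
learning_rate = 0.05
for _ in range(200):
    feature, vote = random.choice(feedback)
    weight += learning_rate * vote * feature

def reward(feature: float) -> float:
    """Score a candidate response; higher means 'more like upvoted answers'."""
    return weight * feature

# The system can then prefer whichever candidate earns the higher learned reward.
candidates = {"terse answer": 0.25, "detailed answer": 0.85}
best = max(candidates, key=lambda name: reward(candidates[name]))
print(f"learned weight={weight:.2f}, preferred response: {best}")

The point of the sketch is simply that the votes never teach the model facts; they only reshape the scores it assigns to answers it can already produce, which is why better feedback improves the grade but not the underlying knowledge.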
This rudimentary AI forces people to rethink which tasks can be carried out with minimal human intervention. If an AI is capable of passing the bar exam, then is there any reason it could not write a legal brief or give sound legal advice? Likewise, if an AI can pass a medical-licensing exam, is there any reason it could not offer sound medical advice?
