Overcoming the limitations of large language models
By Janna Lipenkova  |  Feb 24, 2023
Large language models, or LLMs, have garnered a lot of attention lately, but many exciting developments and applications are yet to come. In this article, Janna Lipenkova contrasts the learning process of LLMs with human learning, discusses the gaps and presents a range of methods to augment and enhance LLMs with human-like cognitive capabilities.


How popular LLMs score along human cognitive skills (Source: semantic embedding analysis of ca. 400k AI-related online texts since 2021)
Disclaimers: This article was written without the support of ChatGPT. Also, all images are by the author unless otherwise noted.

In the last couple of years, large language models (LLMs) such as ChatGPT, T5 and LaMDA have demonstrated an amazing ability to produce human-like language. Casual observers are quick to attribute intelligence to models and algorithms, but how much of this is emulation, and how much genuinely resembles the rich language capability of humans?

When confronted with the natural-sounding, confident outputs of these models, it is sometimes easy to forget that language per se is only the tip of the communication iceberg. Its full power unfolds in combination with a wide range of complex cognitive skills relating to perception, reasoning, and communication. Humans acquire these skills naturally from the surrounding world as they grow; the learning inputs and signals available to LLMs are far more meager. LLMs are forced to learn only from the surface form of language, and their success criterion is not communicative efficiency but the reproduction of high-probability linguistic patterns. In a business context, giving too much power to an LLM can lead to unpleasant surprises. Faced with its own limitations, an LLM will not admit to them. Instead, it gravitates to the opposite extreme: producing nonsense, toxic content, or even dangerous advice, all with a high level of confidence. A medical virtual assistant driven by GPT-3 might, at a certain point in the conversation, even advise its users to kill themselves.
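The training signal described above, reproducing high-probability surface patterns, can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how production LLMs are implemented, and the function names are illustrative; the point is that the model only counts which tokens follow which, with no notion of truth or communicative intent:

```python
# Minimal sketch of the LLM training signal: next-token prediction.
# A toy bigram "language model" that learns only surface co-occurrence
# statistics from its corpus -- no grounding, no notion of truth.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which token follows which; this is all the model ever sees."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def next_token(counts, prev):
    """Emit the highest-probability continuation, whether or not it is true."""
    if prev not in counts:
        return None  # no pattern seen: a real LLM would still answer confidently
    return counts[prev].most_common(1)[0][0]

corpus = [
    "the moon is bright",
    "the moon is bright",
    "the moon is cheese",  # a frequent falsehood would win just as easily
]
model = train_bigram(corpus)
print(next_token(model, "is"))  # picks the more frequent continuation
```

If the falsehood outnumbered the fact in the corpus, the model would emit it with equal confidence: frequency, not accuracy, is the only criterion it optimizes.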

The content herein is subject to copyright by The Yuan. All rights reserved. The content of the services is owned or licensed to The Yuan. The copying or storing of any content for anything other than personal use is expressly prohibited without prior written permission from The Yuan, or the copyright holder identified in the copyright notice contained in the content.