AI development pause risks throwing the baby out with the bathwater
By Ricardo Vinuesa  |  Aug 11, 2023
The public debate over AI constantly changes in response to AI’s own shifts. Recent advances in generative AI and large language models (LLMs) have prompted calls for a timeout in AI development. Prof Ricardo Vinuesa of Sweden’s KTH Royal Institute of Technology argues such a moratorium would be disastrous.

STOCKHOLM - Recent developments in generative artificial intelligence (AI) - especially in the context of large language models (LLMs) - have had a significant impact on the public debate over the past few months. OpenAI has been at the forefront of this latest wave, showcasing the performance of its LLM GPT-4, a generative pre-trained transformer that achieves excellent results across a wide range of tasks. Perhaps the most interesting aspect of this debate lies in the fact that the text produced by current LLMs is of very high quality, closely resembling human language.

This may give the impression that such LLMs have reached some sort of understanding of and insight into the topic being discussed, which has captured the public imagination - and perhaps led to a certain alarmism focused on the wrong aspects. At the same time, the Future of Life Institute issued an open letter - signed by many scientists and practitioners - calling for a six-month pause on AI experiments, and more precisely a halt to training AI systems that are "more powerful than GPT-4." This may be well-intentioned, but it is also a loose definition that is difficult to quantify and implement.

As recently argued by Baum et al. (2023), the main problem with the debate in its current form is that it centers around artificial general intelligence (AGI). This concept refers to a super-intelligence capable of outperforming humans across a very wide range of tasks, as opposed to current AI systems, which have narrow intelligence, i.e., they can outperform humans only in very specific tasks. There is currently no clear or obvious path towards such an AGI, since current LLMs only give an appearance of understanding, while actually exhibiting only shallow insight into the data and the text being produced. However, such limitations are not the only risks associated with current AI systems, and in particula
