Does AI development need to hit the brakes or the gas pedal?
By Simone Castello  |  Oct 16, 2023
Many leading experts argue AI is developing too quickly and that a pause is needed, while others vigorously urge the contrary. Guardrails are needed either way, asserts Simone Castello, a former BBC journalist who is now Science Communicator at Cambridge University.

CAMBRIDGE, UK - Generative AI could add USD7 trillion to the global economy over the next 10 years, research by Goldman Sachs shows. Governments, businesses, academics, and individuals are all experimenting with artificial intelligence (AI) platforms. However, the speed of development and lack of understanding about the capabilities of large language models (LLMs) have led some experts to express grave concerns about the existential threat AI poses to humanity. As the technology grows more powerful, it could cause ever deeper societal-scale disruption unless its development is slowed and regulated.

Responsible AI advocates are calling for safeguards, standards, and regulatory approaches without stifling innovation in the process.

AI development: Fast and furious, or too slow?

Many industry leaders and scientists worry about the ramifications of AI’s continued progress for humanity’s future. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” San Francisco-based nonprofit the Center for AI Safety said in a statement.

Signatories to the statement - which warns that AI poses an extinction-level risk to humanity - include such eminent academics as University of California, Berkeley Computer Science Prof Dawn Song, Stanford University Electrical Engineering Prof Emeritus Martin Hellman, Prof Ya-Qin Zhang, dean of AI Research at Tsinghua University in Beijing, and Prof Yi Zeng, director of the Brain-Inspired Cognitive AI Lab at the Chinese Academy of Sciences’ Institute of Automation.

Notable industry figures - Demis Hassabis, chief executive of Google DeepMind; OpenAI chief executive Sam Altman; and Anthropic chief executive Dario Amodei - also signed the statement, as did public figures from non-governmental organizations and governments.
