A forecaster’s foray into AI safety
By Tolga Bilge  |  Nov 30, 2023
The unchecked advancement of AI systems is a ticking time bomb. It’s time to regulate, writes superforecaster Tolga Bilge.

BERGEN, NORWAY - My background is as a superforecaster, with a track record of estimating the probabilities of future events, from geopolitical shifts to disease outbreaks. The further I delved into this field, the more concerned I became about our long-term future, particularly the potentially catastrophic impacts of advanced AI systems. Now I find myself immersed in a critical dialogue on AI safety and governance, a discourse that could shape the future of our planet.

An important observation is that ensuring AI is safe and benefits everyone is as much a geopolitical problem as a technical one. To avert a reckless race in AI development, in which safety best practices fall by the wayside, we need effective regulations that can command wide international support. Nations must come together around this shared interest, just as the Non-Proliferation Treaty once united opposing superpowers.

Earlier this year, motivated by concern over the dangers of unchecked AI advancement, and in despair over the lack of serious governance proposals to manage them, I authored a first attempt at a concrete treaty blueprint for mitigating these risks: the Treaty on Artificial Intelligence Safety and Cooperation. Though an imperfect draft, it was received positively by many in the AI governance field.

With this blueprint in hand, I turned to how the policy discourse could be shifted toward concrete, effective measures to reduce AI risks. Observing how the Future of Life Institute’s open letter calling for a pause on giant AI experiments had succeeded in widening the debate, I built a small team, including my fellow forecaster Eli Lifland, and we penned a new open letter.
