GENEVA - The race to apply artificial intelligence (AI) to nuclear-weapons systems is no longer science fiction; it is actively underway, and it is a development that could make nuclear war more likely. Governments worldwide are acting to ensure the safe development and application of AI, so there is an opportunity to mitigate this danger. But if world leaders are to seize it, they must first recognize the seriousness of the threat.
In recent weeks, the G7 - the Group of Seven intergovernmental forum comprising the United States, United Kingdom, Canada, Germany, France, Italy, and Japan - agreed on the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems to “promote safe, secure, and trustworthy AI worldwide,” and US President Joe Biden issued an executive order establishing new standards for AI “safety and security.” The UK also hosted the first-ever global AI Safety Summit with the goal of ensuring the tech is developed in a “safe and responsible” manner.
Unfortunately, none of these initiatives adequately addresses the risks posed by the application of AI to nuclear weapons. Both the G7 code of conduct and Biden's executive order refer only in passing to the need to protect populations from AI-generated chemical, biological, and nuclear threats. UK Prime Minister Rishi Sunak likewise failed to mention the acute threat posed by nuclear-weapons-related AI.