Preventing an AI Apocalypse
By Seth Baum  |  Jun 30, 2021
The apocalyptic view is that artificial intelligence-driven machines will outsmart humanity, take over the world, and kill us all. While many dismiss this possibility, it still needs to be taken seriously. People developing AI applications need to think about the implications of their work and be exposed to outside perspectives. Action needs to be taken now to minimize the risk of a catastrophe down the road.

NEW YORK - Recent advances in artificial intelligence (AI) have been nothing short of dramatic. AI is transforming nearly every sector of society, from transport to medicine to defense. It is thus worth considering what will happen when AI becomes even more advanced than it already is.

The apocalyptic view is that AI-driven machines will outsmart humanity, take over the world, and kill us all. This scenario crops up often in science fiction, and it is easy enough to dismiss, given that humans remain firmly in control. Yet many AI experts take the apocalyptic perspective seriously, and are right to do so. The rest of society should as well.

To understand what is at stake, consider the distinction between narrow AI and artificial general intelligence (AGI). Narrow AI can operate only in one or a few domains at a time, so while it may outperform humans in select tasks, it remains under human control.

AGI, by contrast, can reason across a wide range of domains, and thus could replicate many human intellectual skills while retaining all of the advantages of computers, such as perfect memory recall. Run on sophisticated computer hardware, AGI could outpace human cognition. It is difficult to conceive of an upper limit for how advanced AGI could become.

As it stands, most AI is narrow. Even the most advanced current systems have only limited generality. For example, Google DeepMind’s AlphaZero system was able to master Go, chess, and shogi - making it more general than most other AI systems, which can be applied only to a single specific activity - yet it has still demonstrated capability only within the limited confines of certain highly structured board games.

Many knowledgeable people dismiss the prospect of advanced AGI. Some, such as Selmer Bringsjord of Rensselaer

The content herein is subject to copyright by Project Syndicate. All rights reserved.