AI will never become AGI because it will never become wholly human
By Gil Press  |  Jun 28, 2023

Day 8

Day eight brings The Yuan’s expedition of intelligent exploration back to the shores of Massachusetts Bay - land of MIT, the pulsating tech hub of Boston, and the Native American nation of the Wampanoag, the ‘People of the First Light,’ whose generous humanity saved the Pilgrims from starvation during their first desperate winter (1621) amid the harsh, unfamiliar conditions of the New World. Gil Press, managing partner at marketing, publishing, research, and education consultancy gPress and a noted AI commentator, sits in the pilot seat to propound his thesis that humanity’s very humanness will keep it a hop, skip, and a jump ahead of AGI, or any other sentient AI, since even in the unlikely event such a thing ultimately arises, it will be but a pale, artificial shadow of its creator.


Shifeng Wang
Chief Editor, The Yuan

BELMONT, MASSACHUSETTS - There is striking agreement among all the participants in this debate about the benefits, dangers, and future of artificial intelligence (AI). Their common assumption is that AI will reach its full potential - or its full destructive power - when it becomes artificial general intelligence (AGI).

Given humans’ deficient and often muddled intelligence, a clear and consistent definition of AGI is almost always missing from discussions of how it is going to destroy humanity or how it is going to cure cancer. Still, OpenAI, the developer of ChatGPT, has regarded AGI as its ultimate goal ever since its founding in 2015, and Sam Altman, its chief executive, recently defined it as “AI systems that are generally smarter than humans.”

A more nuanced definition was provided by the editors of a 2007 book that claimed to have coined the term AGI: “AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation.”1 Other AI researchers and observers include in AGI human attributes such as sentience, consciousness, and intrinsic motivation.

Amid all the talk about AGI, the general assumption is that it is going to be a rational, objective machine, free of typical human bias, emotional judgement, and irrational lapses. This perfect decision-making, planning, and execution machine is expected to correct all human foibles and deficiencies, and in an ideal world it will be an artificial general - and very much improved - intelligence. The reality, however, is far from ideal, which means that a more likely scenario is that AI systems will continue to perform as well as or better than humans in very specific tasks - as computers have been doing for the last 75 years - and will add new tasks to their expanding repertoire.
