AI’s 80-year history offers tantalizing glimpses of its future course
By Gil Press  |  Feb 12, 2024
AI has come a long way since its humble beginnings in the 1940s, and a look back at this history is both fascinating and informative as an indicator of how the tech might continue to evolve, notes Gil Press, managing partner at the consultancy gPress and a noted AI commentator.

BELMONT, MASSACHUSETTS - When ChatGPT became the fastest-growing consumer internet application in history at the end of 2022, it marked a new stage in the worldwide reach of computer programs driven by sophisticated algorithms, large volumes of data, and tremendous processing power. This was also a new high point in the history of artificial intelligence (AI), punctuated by funding peaks and valleys, marked by rival approaches to research and development, and expressed in public fascination, anxiety, and excitement.

Everything started over 80 years ago. 

In April 1943, John Mauchly and J. Presper Eckert of the Moore School, University of Pennsylvania, submitted a proposal to the United States Army’s Ballistics Research Laboratory to build an ‘electronic calculator.’ The result was ENIAC [Electronic Numerical Integrator and Computer], the first-ever electronic general-purpose computer, unveiled to the public in February 1946. 

In December 1943, neurophysiologist Warren S. McCulloch and logician Walter Pitts published ‘A logical calculus of the ideas immanent in nervous activity,’ in which they discussed networks of idealized and simplified neurons and how these could perform simple logical functions. The paper and its description of the functioning of nerve cells in mathematical terms became the inspiration for the development of computer-based ‘artificial neural networks’ and their popular description today as ‘mimicking the brain.’ 
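
To make the idea concrete, here is a minimal, illustrative sketch in Python of a McCulloch-Pitts-style threshold unit: binary inputs, fixed weights, and a threshold, with an output of 1 whenever the weighted sum of the inputs reaches the threshold. The function and parameter names are assumptions for illustration, not notation from the 1943 paper.

```python
def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Simple logical functions realized by choosing weights and thresholds:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```

Networks built from such idealized units were the paper's way of showing that nerve cells, described mathematically, could realize logical functions, which is the sense in which later artificial neural networks came to be described as 'mimicking the brain.'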

The idea that computer hardware and software are similar to humans’ brains and minds took hold early on. Following the lead of some of their developers, the general public referred to the very first modern computers as “thinking machines.”

In one of the first popular introductions to modern computing - the 1949 book Giant Brains, or Machines That Think - computer software developer Edmund Berkeley anticipated future AI developments.
