Contrary to popular belief, AI does actually work, is generally safe
By Patrick Glauner  |  Jan 02, 2024
Some fears and misgivings over AI are justified, but many are blown out of all proportion, leaving most people without an accurate understanding of its true risks. AI expert Prof Patrick Glauner of Germany’s Deggendorf Institute of Technology sets the record straight.

DEGGENDORF, GERMANY - Recent political narratives and fearmongering may argue otherwise, but a closer look reveals that artificial intelligence (AI) as such is in fact safe and largely does work. 

AI models certainly make mistakes, but so do humans. One should apply the same standards to both and bid goodbye to double standards that keep shedding a singularly negative light on AI. This article will explain why AI is safe and does work, discuss methodologies that make it safer, more reliable, and more trustworthy, present contemporary challenges, and make recommendations to political decision-makers.


Does existential harm come from AI itself, or from other underlying causes?

The AI Safety Summit 2023 brought together stakeholders from government, industry, civil society, and elsewhere at Bletchley Park in the United Kingdom in early November. While various issues raised there are legitimate, the claim that AI is the actual cause of most of them seems dubious. United States Vice President Kamala Harris warned the summit that "a senior kicked off his healthcare plan because of a faulty algorithm" faces what is, for him, an existential risk.1 While such an outcome is real and clearly undesirable, this framing distracts from the actual underlying cause of problems of that sort. After all, there seem to be far fewer complaints when seniors are kicked off their healthcare plans by humans - something that happens with far greater frequency.

Examples like these show that there is not necessarily a need to regulate every issue in an AI-specific way. Instead, the US would be better served by focusing on the true source of the problem and reforming its healthcare system so that the vulnerable cannot be kicked off their plans at all. Such an outcome would be (nearly) impossible in a country like Germany.

Comments
nigelmc  |  2024-01-05
"is generally safe" Isn't that a damning criticism? Should we accept a "move fast and break things" or "fail fast" in real world environments using a technology that even its designers and proponents admit they do not fully understand? Or should we err on the side of prudence and caution, ensuring that everything is properly AND FULLY tested before it is released? There appears to be a porous barrier between research and public availability. We would not do it with a bacterium. Why do we do it with software? Would we release a virus into the world to see what happens? Why do we assume that complex software is any less dangerous? Yes, mostly safe. But as the growing problems with supposedly autonomous cars are demonstrating, "mostly" is not good enough unless we want to live in a world of Ford Pintos where software companies decide how many deaths not enough to justify a re-write or cancellation of a project.