Medical field takes first steps toward tackling ‘hallucination’ by LLMs
By Moein Shariatnia  |  Nov 10, 2023
‘Hallucination,’ which refers to LLMs and other AIs presenting falsehoods as seemingly plausible facts, is a grave problem and one yet to be resolved. This becomes especially urgent when patients’ lives are at stake, although hope is at hand, writes ML developer Moein Shariatnia.

TEHRAN - Following the debut of OpenAI’s ChatGPT - along with other similar models - the spotlight has turned toward large language models (LLMs), huge neural networks that have undergone training on extensive collections of text sourced from books and the internet. 

Their training involves optimizing them to predict the next token based on the preceding ones. These models demonstrate an impressive capacity to produce highly persuasive, human-like text, making them suitable for a multitude of applications such as writing assistance, text summarization, language translation, and more. Nonetheless, because these models are trained and optimized to select the most probable next token from an extensive vocabulary, they do not necessarily favor tokens that form factually accurate statements. This makes them susceptible to generating outputs that sound human-like and authentic but are not factually correct. The problem is commonly referred to as hallucination, and many people have likely encountered instances of it when LLMs generate non-existent URLs or other forms of pseudo-evidence in response to user queries. This article provides a more in-depth exploration of the applications and challenges posed by LLMs, especially in the field of medicine.
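
To make the mechanism concrete, the following is a minimal sketch of next-token prediction using the small, openly available GPT-2 model via the Hugging Face transformers library. The model choice, prompt, and top-5 cutoff are illustrative assumptions, not details drawn from ChatGPT or any medical system; the point is simply that the model ranks possible continuations by probability, not by factual accuracy.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small, publicly available model used purely for illustration (an assumption;
# ChatGPT's underlying models are far larger and not publicly released).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most common cause of community-acquired pneumonia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the whole vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The model only ranks continuations by likelihood; nothing here checks
# whether the most probable continuation is factually true, which is how
# fluent but incorrect output (hallucination) can emerge.
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([token_id])!r}: {prob.item():.3f}")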


LLMs in medicine

LLMs are suited to the field of medicine in many ways. ChatGPT even achieved passing scores on the United States Medical Licensing Examination, something no other publicly available general-purpose LLM has managed. Notably, ChatGPT's responses to patient questions were preferred over physician responses in both quality and empathy, as evaluated by licensed healthcare professionals.

The hallucination problem of LLMs is especially challenging for medical applications, where falsehoods presented as plausible facts can directly endanger patients' lives.
