TEHRAN - Following the debut of OpenAI's ChatGPT - along with other similar models - the spotlight has turned toward large language models (LLMs), huge neural networks trained on vast collections of text drawn from books and the internet.
Their training optimizes them to predict the next token from the preceding ones. These models demonstrate an impressive capacity to produce persuasive, human-like text, making them well suited to a multitude of applications such as writing assistance, text summarization, language translation, and more. Nonetheless, because they are trained and optimized to select the most probable next token from an extensive vocabulary, they do not necessarily favor tokens that form factually accurate statements. This makes them susceptible to generating outputs that sound human-like and authoritative but are not grounded in established fact. This problem is commonly referred to as hallucination, and many people have likely encountered instances of it when LLMs generate non-existent URLs or other forms of pseudo-evidence in response to user queries. This article will provide a more in-depth exploration of the applications and challenges posed by LLMs - especially in the field of medicine.
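The next-token loop described above can be sketched with a toy probability table. This is purely illustrative - the probabilities and vocabulary below are invented, and real LLMs use neural networks over tens of thousands of tokens - but it shows how a model keeps appending whichever token scores highest, with no notion of factual accuracy:

```python
# Hypothetical bigram "model": maps a 2-token context to next-token probabilities.
# All values are invented for illustration; they are not from any real LLM.
TOY_MODEL = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Atlantis": 0.4},  # plausible vs. false option
}

def next_token(tokens, model):
    """Greedily pick the most probable next token for the last two tokens."""
    probs = model.get(tuple(tokens[-2:]), {})
    return max(probs, key=probs.get) if probs else None

def generate(prompt, model, max_new=2):
    """Autoregressive generation: each new token is fed back as context."""
    tokens = list(prompt)
    for _ in range(max_new):
        tok = next_token(tokens, model)
        if tok is None:
            break
        tokens.append(tok)
    return tokens

print(generate(["the", "capital"], TOY_MODEL))
# Prints ['the', 'capital', 'of', 'France']
```

The key point is that the selection criterion is probability under the training distribution, not truth: had "Atlantis" been assigned the higher score, the model would emit it just as confidently - which is exactly the hallucination failure mode described above.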
LLMs in medicine
LLMs lend themselves to medicine in many ways. ChatGPT even achieved passing scores on the United States Medical Licensing Examination, a feat unmatched by any other publicly available general-purpose LLM.1 Notably, ChatGPT's responses to patient questions were rated higher than physician responses in both quality and empathy by the licensed healthcare professionals who evaluated them.2
The hallucination problem of LLMs is especially challenging for medical