MOUNTAIN VIEW, CALIFORNIA - On March 2, when asked about hallucinations in large language models (LLMs), Google co-founder Sergey Brin asserted that “it’s a problem right now, no question about it. We have made them hallucinate less and less over time, but I would definitely be excited to see a breakthrough that brings it to near zero. You know, we can’t just count on breakthroughs, I think we’re going to keep doing the incremental kinds of things we do, to just like bring all the hallucinations down, down, down, over time. Like I say, a breakthrough would be good.”
Brin is right, and I have been working on the hallucination problem ever since. The first step toward tackling it is for the LLM to detect when it is hallucinating. This sounds straightforward enough, but it is extremely difficult to do, because - in the words of Tesla's former Director of AI Andrej Karpathy - "In some sense, hallucination is all LLMs do. We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful. It's only when the dreams go into deemed factually incorrect territory that we label it a 'hallucination.'"
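One common detection idea - not the only one, and not necessarily the approach I pursue in later work - is self-consistency: sample the model several times on the same question and measure how much the answers scatter, since confabulated facts tend to vary from sample to sample while well-grounded ones tend to repeat. The following is a minimal sketch of that idea; the `sample_answer` callable is a hypothetical stand-in for whatever function queries a model at non-zero temperature.

```python
# Minimal sketch of self-consistency-based hallucination detection.
# `sample_answer` is a hypothetical placeholder for a model-querying function.
from collections import Counter
from typing import Callable, List


def hallucination_score(prompt: str,
                        sample_answer: Callable[[str], str],
                        n_samples: int = 5) -> float:
    """Rough 0-1 score: 0 means all samples agree, 1 means maximal disagreement.

    Intuition: if the model 'knows' a fact, repeated sampling tends to converge
    on the same answer; if it is confabulating, the answers scatter.
    """
    answers: List[str] = [sample_answer(prompt).strip().lower()
                          for _ in range(n_samples)]
    majority_count = Counter(answers).most_common(1)[0][1]
    # Fraction of samples that deviate from the majority answer.
    return 1.0 - majority_count / n_samples


if __name__ == "__main__":
    # Toy sampler that always returns the same string, so the score is 0.0.
    fixed = lambda p: "Paris"
    print(hallucination_score("What is the capital of France?", fixed))  # -> 0.0
```

In practice the scattered answers would need semantic comparison rather than exact string matching, and a single agreement score is only a weak signal - which is precisely why a genuine breakthrough, rather than incremental patching, remains so attractive.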