How to spot generative AI ‘hallucinations’ and prevent them

Generative AI can have “hallucinations” when it doesn’t know the answer to a question; here’s how to spot them.

Researchers from the University of Oxford have devised a new method to help users work out when generative AI could be “hallucinating.” A hallucination occurs when an AI system is posed a query it doesn’t know the answer to, causing it to make up an incorrect answer.

Luckily, there are ways both to spot a hallucination when it is happening and to prevent it from happening altogether.

How to stop AI hallucinations

The new study from the University of Oxford team has produced a statistical model that can identify when questions asked of generative AI chatbots are most likely to produce an incorrect answer.

This is a real concern for generative AI models, as the fluency of their responses means they can pass off false information as fact. That was highlighted when ChatGPT went rogue with false answers back in February.

With more and more people from all walks of life turning to AI tools to help them with school, work, and daily life, AI experts like those involved in this study are calling for clearer ways to tell when AI is making up responses, especially on serious topics like healthcare and the law.

The researchers at the University of Oxford claim that their method can tell the difference between a model giving a correct answer and a model just making something up.

“LLMs are highly capable of saying the same thing in many different ways, which can make it difficult to tell when they are certain about an answer and when they are literally just making something up,” said study author Dr Sebastian Farquhar while speaking to the Evening Standard. “With previous approaches, it wasn’t possible to tell the difference between a model being uncertain about what to say versus being uncertain about how to say it. But our new method overcomes this.”
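The intuition behind that “semantic uncertainty” idea can be sketched in a few lines of code: ask the model the same question several times, group the answers by what they mean rather than how they are worded, and measure how scattered those meanings are. The sketch below is purely illustrative, not the Oxford team’s actual implementation; the function names are made up, and the crude string comparison stands in for the language-model-based check of whether two answers share a meaning.

```python
import math

def meanings_equivalent(a: str, b: str) -> bool:
    """Stand-in for a semantic-equivalence check.
    A real system would use a language model to judge whether two answers
    say the same thing; here we just compare lightly normalised text."""
    return a.strip().lower().rstrip(".") == b.strip().lower().rstrip(".")

def semantic_uncertainty(answers: list[str]) -> float:
    """Cluster sampled answers by meaning, then return the entropy of the
    cluster distribution. Low entropy: the model keeps saying the same thing
    in different words. High entropy: the answers disagree in meaning,
    which suggests the model may be making something up."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if meanings_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Hypothetical example: the same question sampled five times from a chatbot.
consistent = ["Paris.", "paris", "Paris", "Paris.", "paris."]
scattered = ["Paris.", "Lyon.", "Marseille.", "Paris.", "Toulouse."]
print(semantic_uncertainty(consistent))  # 0.0  -> consistent meaning
print(semantic_uncertainty(scattered))   # ~1.3 -> possible hallucination
```

The key design choice, and the distinction Farquhar draws above, is that wording differences alone do not raise the score: only answers that mean different things land in different clusters and push the uncertainty up.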

However, there is of course still work to do to iron out the errors AI models can make.

“Semantic uncertainty helps with specific reliability problems, but this is only part of the story,” he added. “If an LLM makes consistent mistakes, this new method won’t catch that. The most dangerous failures of AI come when a system does something bad but is confident and systematic.

“There is still a lot of work to do.”

Featured image: Ideogram

The post “How to spot generative AI ‘hallucinations’ and prevent them” by Rachael Davies was published on 06/19/2024 by readwrite.com