Original title: Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Authors: Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Chenghu Zhou, Xinbing Wang, Luoyi Fu
In this article, researchers tackle a core challenge with Large Language Models (LLMs): their tendency to generate unreliable content, termed ‘hallucinations.’ Existing detection methods for these errors often rely on costly procedures such as retrieving external references or sampling multiple responses. The paper introduces a reference-free approach centered on uncertainty estimation. It mimics human fact-checking by focusing on informative keywords, propagating the unreliability of preceding tokens through the context, and taking token properties into account. The method achieves state-of-the-art results without requiring external references or additional sampled responses. Across experiments on multiple datasets and evaluation metrics, the approach outperforms existing techniques, offering a promising route to detecting LLM-generated hallucinations.
Original article: https://arxiv.org/abs/2311.13230
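As a rough illustration of the underlying idea, and not the paper's exact formulation, the sketch below scores a claim by the token-level uncertainty (negative log-probability) assigned by a small proxy language model, averaged only over hand-picked keyword tokens. The choice of `gpt2` as the proxy, the keyword list, and the naive matching on subword tokens are all simplifying assumptions.

```python
# Minimal sketch: keyword-focused, uncertainty-based hallucination scoring
# with a proxy causal LM. Assumptions: GPT-2 as proxy model, a hand-supplied
# keyword set, and simple subword-level keyword matching.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def token_uncertainties(text: str) -> list[tuple[str, float]]:
    """Return (token, negative log-probability) pairs under the proxy model."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits                # (1, seq_len, vocab)
    # Predictions for positions 1..n-1; the first token has no preceding context.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)[0]
    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, nll.tolist()))


def claim_score(text: str, keywords: set[str]) -> float:
    """Average uncertainty over tokens whose surface form matches a keyword.

    Focusing on keywords rather than all tokens reflects the intuition that
    factual errors surface in content words, not function words. Real systems
    would also need to align keywords that span multiple subword tokens.
    """
    scored = token_uncertainties(text)
    focused = [u for tok, u in scored if tok.lstrip("Ġ").lower() in keywords]
    return sum(focused) / len(focused) if focused else 0.0


# Example usage: a higher score suggests the claim is less reliable.
claim = "The Eiffel Tower was completed in 1901 in Berlin."
print(claim_score(claim, keywords={"tower", "1901", "berlin"}))
```

A higher average uncertainty over the keywords flags the sentence as a likely hallucination; the paper's method adds further refinements (such as propagating unreliability from earlier hallucinated tokens and weighting tokens by their properties) beyond this simple average.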