Hallucination, the generation of factually incorrect content, is a growing challenge in Large Language Models (LLMs). Existing detection and mitigation methods are often isolated and insufficient for domain-specific needs, and a standardized pipeline is lacking. This paper introduces THaMES (Tool for Hallucination Mitigations and EvaluationS), an integrated framework and library that addresses this gap. THaMES offers an end-to-end solution for evaluating and mitigating hallucinations in LLMs, featuring automated test-set generation, multifaceted benchmarking, and adaptable mitigation strategies. It automates test-set creation from any corpus, ensuring high data quality, diversity, and cost-efficiency through techniques such as batch processing, weighted sampling, and counterfactual validation. THaMES assesses a model's ability to detect and reduce hallucinations across various tasks, including text generation and binary classification, and applies optimal mitigation strategies such as In-Context Learning (ICL), Retrieval-Augmented Generation (RAG), and Parameter-Efficient Fine-tuning (PEFT). Evaluations of state-of-the-art LLMs on a knowledge base of academic papers, political news, and Wikipedia articles reveal that commercial models such as GPT-4o benefit more from RAG than from ICL, while open-weight models such as Llama-3.1-8B-Instruct and Mistral-Nemo gain more from ICL. Additionally, PEFT significantly enhances the performance of Llama-3.1-8B-Instruct on both evaluation tasks.