Large language models (LLMs) often fail to synthesize information from their context to generate an accurate response, rendering them unreliable in knowledge-intensive settings where reliability of the output is key. A critical component for reliable LLMs is the integration of a robust fact-checking system that can detect hallucinations across various formats. While several open-access fact-checking models are available, their functionality is often limited to specific tasks, such as grounded question-answering or entailment verification, and they perform less effectively in conversational settings. On the other hand, closed-access models such as GPT-4 and Claude offer greater flexibility across different contexts, including grounded dialogue verification, but are hindered by high costs and latency. In this work, we introduce VERITAS, a family of hallucination detection models designed to operate flexibly across diverse contexts while minimizing latency and cost. VERITAS achieves state-of-the-art average performance across all major hallucination detection benchmarks, with a $10\%$ increase in average performance compared to similar-sized models, and approaches the performance of GPT-4 Turbo in an LLM-as-a-judge setting.