While large language models (LLMs) have demonstrated remarkable abilities across various fields, hallucination remains a significant challenge. Recent studies have explored hallucinations through the lens of internal representations, proposing mechanisms to decipher LLMs' adherence to facts. However, these approaches often fail to generalize to out-of-distribution data, raising concerns about whether internal representation patterns reflect a fundamental factual awareness or merely overfit spurious correlations specific to the training datasets. In this work, we investigate whether a universal truthfulness hyperplane that distinguishes the model's factually correct and incorrect outputs exists within the model. To this end, we scale up the number of training datasets and conduct an extensive evaluation -- we train the truthfulness hyperplane on a diverse collection of over 40 datasets and examine its cross-task, cross-domain, and in-domain generalization. Our results indicate that increasing the diversity of the training datasets significantly enhances performance in all scenarios, while the volume of data samples plays a less critical role. This finding supports the optimistic hypothesis that a universal truthfulness hyperplane may indeed exist within the model, offering promising directions for future research.
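The central object here is a linear probe: a hyperplane in hidden-state space that separates representations of factually correct and incorrect outputs, trained on a pool of tasks and tested on a held-out one. The sketch below illustrates this setup with a logistic-regression probe under stated assumptions; the feature extraction, task names, and `load_hidden_states` helper are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of training a "truthfulness hyperplane": a linear probe on
# hidden-state features labeled by factual correctness. Feature extraction,
# task names, and the helper below are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def load_hidden_states(task: str, n: int = 200, dim: int = 4096):
    """Placeholder for extracting LLM hidden states and correctness labels.
    In practice, X would be activations at a chosen layer for each example,
    and y would mark whether the model's output is factually correct."""
    X = rng.normal(size=(n, dim)).astype(np.float32)
    y = rng.integers(0, 2, size=n)
    return X, y

# Train on a diverse pool of tasks (the paper uses 40+ datasets).
train_tasks = ["task_a", "task_b", "task_c"]   # hypothetical names
held_out_task = "task_d"                       # for cross-task evaluation

train_data = [load_hidden_states(t) for t in train_tasks]
X_train = np.concatenate([X for X, _ in train_data])
y_train = np.concatenate([y for _, y in train_data])

# The "hyperplane" is simply the weight vector of a linear classifier.
probe = LogisticRegression(max_iter=1000, C=1.0)
probe.fit(X_train, y_train)

# Cross-task generalization: score a dataset never seen during training.
X_test, y_test = load_hidden_states(held_out_task)
auc = roc_auc_score(y_test, probe.decision_function(X_test))
print(f"Held-out AUROC on {held_out_task}: {auc:.3f}")
```

With real activations, the held-out AUROC measures how well a single hyperplane learned on diverse tasks transfers to unseen ones, which is the kind of cross-task evaluation the abstract describes.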