While many capabilities of language models (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood. Hallucinations come in many forms, and there is no universally accepted definition; we therefore restrict our study to hallucinations where the correct answer appears verbatim in the training set. To fully control the content of the training data, we construct a knowledge graph (KG)-based dataset and use it to train a set of increasingly large LMs. We find that for a fixed dataset, larger and longer-trained LMs hallucinate less. However, hallucinating on $\leq 5\%$ of the training data requires a model an order of magnitude larger, and thus an order of magnitude more compute, than Hoffmann et al. (2022) reported was optimal. Given this cost, we study how hallucination detectors depend on scale. While detector size improves performance on a fixed LM's outputs, we find an inverse relationship between the scale of an LM and the detectability of its hallucinations.
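Because a hallucination is defined here as a completion in which the correct answer from the training set does not appear verbatim, the measurement reduces to a simple string check over KG facts. The following is a minimal sketch of that evaluation loop, assuming (subject, relation, object) triples, a `generate` wrapper around the LM, and a naive prompt template; none of these details are taken from the paper itself:

```python
# Minimal sketch of the hallucination-rate measurement described above.
# Assumptions (not the authors' code): triples are (subject, relation, object)
# strings, and `generate` wraps an LM that completes "subject relation".

from typing import Callable, Iterable, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def hallucination_rate(
    triples: Iterable[Triple],
    generate: Callable[[str], str],
) -> float:
    """Fraction of KG facts the model fails to reproduce verbatim.

    A completion counts as a hallucination when the object of the
    training triple does not appear in the model's output.
    """
    total = 0
    hallucinated = 0
    for subject, relation, obj in triples:
        prompt = f"{subject} {relation}"
        completion = generate(prompt)
        total += 1
        if obj not in completion:  # correct answer must appear verbatim
            hallucinated += 1
    return hallucinated / max(total, 1)

# Usage with a toy "model" that knows exactly one fact:
kg = [("Paris", "is the capital of", "France")]
fake_lm = lambda prompt: "France" if prompt.startswith("Paris") else "unknown"
print(hallucination_rate(kg, fake_lm))  # 0.0 -> no hallucinations
```

The verbatim-containment check is what makes this definition tractable: it requires no external judge and no world knowledge beyond the KG used to build the training set.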