How do large language models (LLMs) know what they know? Answering this question has been difficult because pre-training data is often a "black box": unknown or inaccessible. The recent release of nanochat, a family of small LLMs with fully open pre-training data, addresses this gap by providing a transparent view into where a model's parametric knowledge comes from. Toward the goal of understanding how LLMs encode knowledge, we release NanoKnow, a benchmark dataset that partitions questions from Natural Questions and SQuAD into splits based on whether their answers appear in nanochat's pre-training corpus. These splits let us disentangle the sources of knowledge that LLMs rely on when producing an output. To demonstrate NanoKnow's utility, we conduct experiments on eight nanochat checkpoints. Our findings show that: (1) closed-book accuracy is strongly influenced by answer frequency in the pre-training data; (2) providing external evidence mitigates this frequency dependence; (3) even with external evidence, models are more accurate when answers were seen during pre-training, demonstrating that parametric and external knowledge are complementary; and (4) irrelevant information is harmful, with accuracy decreasing with both the position and the number of irrelevant contexts. We release all NanoKnow artifacts at https://github.com/castorini/NanoKnow.
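To make the split construction concrete, here is a minimal sketch of how questions might be partitioned by answer presence in a pre-training corpus. The function names and the case-insensitive substring-match criterion are illustrative assumptions, not NanoKnow's exact procedure:

```python
# Minimal sketch of a NanoKnow-style split construction.
# Assumptions (not from the paper): the corpus is an iterable of text
# documents, and "answer present" means a case-insensitive substring match.

def answer_in_corpus(answer: str, corpus) -> bool:
    """Return True if the answer string appears in any corpus document."""
    needle = answer.lower()
    return any(needle in doc.lower() for doc in corpus)

def partition_questions(qa_pairs, corpus):
    """Split (question, answer) pairs by answer presence in pre-training data."""
    seen, unseen = [], []
    for question, answer in qa_pairs:
        target = seen if answer_in_corpus(answer, corpus) else unseen
        target.append((question, answer))
    return seen, unseen

# Toy usage: one answer appears in the corpus, one does not.
corpus = ["Ottawa is the capital of Canada.", "The Nile flows north."]
qa_pairs = [("What is the capital of Canada?", "Ottawa"),
            ("Who wrote Hamlet?", "Shakespeare")]
seen, unseen = partition_questions(qa_pairs, corpus)
print(len(seen), len(unseen))  # 1 1
```

A real pipeline would likely also count answer occurrences, since finding (1) concerns answer frequency, not just presence.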