Language Models (LMs) memorize a vast amount of factual knowledge, exhibiting strong performance across diverse tasks and domains. However, it has been observed that their performance diminishes when dealing with less popular or low-frequency concepts and entities, for example in domain-specific applications. The two prominent approaches to enhance the performance of LMs on low-frequency topics are Retrieval Augmented Generation (RAG) and fine-tuning (FT) over synthetic data. This paper explores and evaluates the impact of RAG and FT on customizing LMs to handle low-frequency entities in question answering tasks. We conduct extensive experiments on twelve LMs of varying size and type, combined with different fine-tuning, data augmentation, and retrieval models. Our findings indicate that while FT boosts performance across entities of varying popularity, RAG surpasses FT by a large margin, particularly for the least popular factual knowledge. Additionally, the success of both RAG and FT is amplified by improving the retrieval and data augmentation techniques. Fine-tuning, while beneficial for small LMs, requires extensive resources. To address this issue, we propose a new approach, Stimulus RAG, which surpasses the effectiveness of fine-tuning-based approaches, thereby eliminating the need for the costly data augmentation and fine-tuning steps for enriching LMs with less popular factual knowledge.