Retrieval-augmented generation models augment the knowledge encoded in a language model by providing relevant external knowledge (context) during generation. Although the quantity and quality of context are known to affect the inference-time performance of retrieval-augmented generation models, little research has explored how these characteristics affect model training. This paper examines how context quantity and quality during training affect the performance of Fusion-in-Decoder (FiD), the state-of-the-art retrieval-augmented generation model, on extractive open-domain question answering tasks. Our experimental results suggest that FiD models overfit to the context quality seen during training and perform suboptimally when evaluated with context of a different quality. The experiments also reveal that FiD models trained with different context quality exhibit different cross-attention distribution patterns: as the context quality used during training increases, FiD models tend to attend more uniformly to each passage in the context. Finally, based on these observations, we propose a method that mitigates overfitting to a specific context quality by introducing a bias into the cross-attention distribution, and we demonstrate that it effectively improves the performance of FiD models across different context qualities.
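The mitigation described above can be sketched in a minimal form. The abstract does not specify the exact shape of the bias, so the following is an illustrative sketch assuming an additive bias term applied to the cross-attention logits before the softmax; the function name and temperature parameter are hypothetical, not from the paper:

```python
import numpy as np

def biased_cross_attention(scores: np.ndarray,
                           bias: np.ndarray,
                           temperature: float = 1.0) -> np.ndarray:
    """Turn raw cross-attention scores over passages into attention weights,
    after adding a per-passage bias to the logits (hypothetical sketch).

    scores: raw attention logits, one per passage.
    bias:   additive bias, same shape as scores; e.g. a bias toward
            uniformity can counteract overly peaked attention.
    """
    logits = scores / temperature + bias
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=-1, keepdims=True)

# Without bias, attention concentrates on the highest-scoring passage;
# a positive bias on another passage shifts mass toward it.
scores = np.array([2.0, 1.0, 0.5])
no_bias = biased_cross_attention(scores, np.zeros(3))
shifted = biased_cross_attention(scores, np.array([0.0, 1.5, 0.0]))
```

Whether the bias is fixed, learned, or derived from passage statistics is a design choice the paper explores; the sketch only shows where such a term enters the attention computation.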