In the era of large language models, techniques such as Retrieval-Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints such as model size and computing resources, context length is often limited, and it becomes challenging for a model to cover very long contexts while answering open-domain questions. This paper proposes a general and convenient method for covering longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder language model that effectively encodes contexts, and the encodings are integrated with the original inputs via cross-attention. With our method, the original language model can cover contexts several times longer while keeping computing requirements close to the baseline. Our experiments demonstrate that, after fine-tuning, performance improves across two held-in datasets, four held-out datasets, and two In-Context Learning settings.
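The core mechanism the abstract describes is cross-attention between the decoder's original inputs and the states produced by a small context encoder. The following is a minimal single-head NumPy sketch of that step, not the paper's implementation: the projection weights are random, the dimensions are illustrative, and all names are assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(decoder_states, encoder_states, d_model=64):
    # Queries come from the decoder's original inputs; keys and values
    # come from the encoder's compressed representation of the long context.
    w_q = rng.standard_normal((decoder_states.shape[-1], d_model))
    w_k = rng.standard_normal((encoder_states.shape[-1], d_model))
    w_v = rng.standard_normal((encoder_states.shape[-1], d_model))
    q = decoder_states @ w_q
    k = encoder_states @ w_k
    v = encoder_states @ w_v
    scores = q @ k.T / np.sqrt(d_model)   # (n_dec, n_enc)
    weights = softmax(scores, axis=-1)    # each query attends over all encoded context states
    return weights @ v                    # (n_dec, d_model)

# A long context (32 encoded states here) is consumed through cross-attention,
# so the decoder's own input length (4 tokens here) does not grow with it.
out = cross_attention(rng.standard_normal((4, 64)), rng.standard_normal((32, 64)))
```

Because the decoder attends to the encoder's states rather than ingesting the raw context tokens, the decoder-side sequence length, and hence its self-attention cost, stays close to the baseline even as the covered context grows.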