Retrieval-Augmented Language Models (RALMs) have significantly improved performance in open-domain question answering (QA) by leveraging external knowledge. However, RALMs still struggle with unanswerable queries, where the retrieved contexts do not contain the correct answer, and with conflicting information, where different sources provide contradictory answers due to imperfect retrieval. This study introduces an in-context learning-based approach to enhance the reasoning capabilities of RALMs, making them more robust in imperfect retrieval scenarios. Our method incorporates Machine Reading Comprehension (MRC) demonstrations, referred to as cases, to strengthen the model's ability to identify unanswerability and conflicts among the retrieved contexts. Experiments on two open-domain QA datasets show that our approach increases accuracy in identifying unanswerable and conflicting scenarios without requiring additional fine-tuning. This work demonstrates that in-context learning can effectively enhance the robustness of RALMs in open-domain QA tasks.
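To make the approach concrete, here is a minimal sketch of how case demonstrations might be prepended to a RALM prompt. The demonstration templates, labels ("unanswerable", "conflict"), and the `build_prompt` helper are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch: assemble an in-context prompt that prepends MRC
# "case" demonstrations (one unanswerable case, one conflict case) before
# the actual retrieved contexts and question. All templates are assumed.

UNANSWERABLE_CASE = (
    "Context 1: The Eiffel Tower is located in Paris, France.\n"
    "Question: Who designed the Sydney Opera House?\n"
    "Answer: unanswerable (the context does not contain the answer)\n"
)

CONFLICT_CASE = (
    "Context 1: Mount Everest is 8,848 m tall.\n"
    "Context 2: Mount Everest is 8,844 m tall.\n"
    "Question: How tall is Mount Everest?\n"
    "Answer: conflict (the retrieved contexts disagree)\n"
)

def build_prompt(contexts, question):
    """Prepend case demonstrations, then list retrieved contexts and the query."""
    demos = "\n".join([UNANSWERABLE_CASE, CONFLICT_CASE])
    ctx_block = "\n".join(f"Context {i + 1}: {c}" for i, c in enumerate(contexts))
    return f"{demos}\n{ctx_block}\nQuestion: {question}\nAnswer:"

prompt = build_prompt(
    ["Paris is the capital of France."],
    "What is the capital of France?",
)
print(prompt)
```

The assembled prompt would then be passed to the language model as-is; because the demonstrations show the model how to label unanswerable and conflicting cases, no fine-tuning is needed.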