Open Domain Question Answering (ODQA), a task within natural language processing, involves building systems that answer factual questions over large-scale knowledge corpora. Recent advances stem from the confluence of several factors, including large-scale training datasets, deep learning techniques, and the rise of large language models. High-quality datasets are used to train models on realistic scenarios and enable the evaluation of systems on potentially unseen data. Standardized metrics facilitate comparison between different ODQA systems, allowing researchers to objectively track advances in the field. Our study presents a thorough examination of the current landscape of ODQA benchmarking, reviewing 52 datasets and 20 evaluation techniques across textual and multimodal modalities. We introduce a novel taxonomy for ODQA datasets that incorporates both the modality and the difficulty of the question types. Additionally, we present a structured organization of ODQA evaluation metrics together with a critical analysis of their inherent trade-offs. Our study aims to empower researchers by providing a framework for the robust evaluation of modern question-answering systems. We conclude by identifying current challenges and outlining promising avenues for future research and development.
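To make the notion of standardized metrics concrete: two of the most widely used textual ODQA metrics are Exact Match (EM) and token-level F1, both typically computed after SQuAD-style answer normalization. The following is a minimal Python sketch of these standard metrics; the function names are ours for illustration and do not come from any specific library.

    import re
    import string
    from collections import Counter

    def normalize(text: str) -> str:
        """SQuAD-style normalization: lowercase, strip punctuation
        and articles, collapse whitespace."""
        text = text.lower()
        text = "".join(ch for ch in text if ch not in set(string.punctuation))
        text = re.sub(r"\b(a|an|the)\b", " ", text)
        return " ".join(text.split())

    def exact_match(prediction: str, gold: str) -> float:
        """1.0 if the normalized strings are identical, else 0.0."""
        return float(normalize(prediction) == normalize(gold))

    def f1_score(prediction: str, gold: str) -> float:
        """Token-level F1 between a predicted and a gold answer."""
        pred_tokens = normalize(prediction).split()
        gold_tokens = normalize(gold).split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    # A partially correct answer scores 0 on EM but non-zero on F1:
    print(exact_match("the Eiffel Tower", "Eiffel Tower"))            # 1.0 after normalization
    print(f1_score("Gustave Eiffel's tower in Paris", "Eiffel Tower"))  # ~0.29

This pair illustrates the trade-off discussed in the study: EM is strict and unambiguous but penalizes harmless surface variation, while F1 gives partial credit at the cost of occasionally rewarding answers that are only loosely related to the gold answer.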