Conversational question answering increasingly relies on retrieval-augmented generation (RAG) to ground large language models (LLMs) in external knowledge. Yet most existing studies evaluate RAG methods in isolation and focus primarily on single-turn settings. This paper addresses the lack of a systematic comparison of RAG methods for multi-turn conversational QA, where dialogue history, coreference, and shifting user intent substantially complicate retrieval. We present a comprehensive empirical study of vanilla and advanced RAG methods across eight diverse conversational QA datasets spanning multiple domains. Under a unified experimental setup, we evaluate retrieval quality and answer generation with dedicated retrieval and generator metrics, and analyze how performance evolves across conversation turns. Our results show that robust yet straightforward methods, such as reranking, hybrid BM25, and HyDE, consistently outperform vanilla RAG. In contrast, several advanced techniques fail to yield gains and can even degrade performance below the No-RAG baseline. We further demonstrate that dataset characteristics and dialogue length strongly influence retrieval effectiveness, explaining why no single RAG strategy dominates across settings. Overall, our findings indicate that effective conversational RAG depends less on method complexity than on alignment between the retrieval strategy and the dataset structure. We publicly release our code.\footnote{\href{https://github.com/Klejda-A/exp-rag.git}{GitHub Repository}}