Retrieval-Augmented Generation (RAG) enhances LLM factuality, yet design guidance remains English-centric, limiting insights for morphologically rich languages like Turkish. We address this gap by constructing a comprehensive Turkish RAG dataset derived from Turkish Wikipedia and CulturaX, comprising question-answer pairs and relevant passage chunks. We benchmark seven stages of the RAG pipeline, from query transformation and reranking to answer refinement, without task-specific fine-tuning. Our results show that complex methods like HyDE achieve the highest accuracy (85%), considerably above the baseline (78.70%). Moreover, a Pareto-optimal configuration combining cross-encoder reranking with context augmentation reaches comparable performance (84.60%) at much lower cost. We further demonstrate that over-stacking generative modules can degrade performance by distorting morphological cues, whereas simple query clarification paired with robust reranking offers an effective alternative.