Retrieval-Augmented Generation (RAG) is a powerful approach that enables large language models (LLMs) to incorporate external knowledge. However, evaluating the effectiveness of RAG systems in specialized scenarios remains challenging due to the high cost of data construction and the lack of suitable evaluation metrics. This paper introduces RAGEval, a framework for assessing RAG systems across diverse scenarios by generating high-quality documents, questions, answers, and references through a schema-based pipeline. With a focus on factual accuracy, we propose three novel metrics, Completeness, Hallucination, and Irrelevance, to rigorously evaluate LLM-generated responses. Experimental results show that RAGEval outperforms zero-shot and one-shot methods in the clarity, safety, conformity, and richness of generated samples. Furthermore, using LLMs to score the proposed metrics demonstrates a high level of consistency with human evaluations. RAGEval establishes a new paradigm for evaluating RAG systems in real-world applications. The code and dataset are released at https://github.com/OpenBMB/RAGEval.