Retrieval-Augmented Generation (RAG) offers a promising way to address key limitations of Large Language Models (LLMs), such as hallucination and difficulty keeping up with real-time updates. The approach is particularly critical in expert and domain-specific applications, where LLMs struggle to cover specialized knowledge. Evaluating RAG models in such scenarios is therefore crucial, yet current studies often rely on general knowledge sources like Wikipedia and assess models on common-sense problems. In this paper, we evaluate LLMs in RAG settings in a domain-specific context: college enrollment. We identify six abilities required of RAG models: conversational RAG, analyzing structural information, faithfulness to external knowledge, denoising, solving time-sensitive problems, and understanding multi-document interactions. Each ability has an associated dataset, built on shared corpora, for evaluating RAG models' performance. We evaluate popular LLMs such as Llama, Baichuan, ChatGLM, and GPT models. Experimental results indicate that existing closed-book LLMs struggle with domain-specific questions, highlighting the need for RAG models to solve expert problems. Moreover, RAG models still have room to improve in comprehending conversational history, analyzing structural information, denoising, processing multi-document interactions, and faithfulness to expert knowledge. We hope future studies will address these problems.