Retrieval-Augmented Generation (RAG) offers a promising way to address several limitations of Large Language Models (LLMs), such as hallucination and difficulty keeping up with real-time updates. This approach is particularly critical in expert and domain-specific applications, where LLMs struggle to cover specialized knowledge. Evaluating RAG models in such scenarios is therefore crucial, yet current studies often rely on general knowledge sources like Wikipedia and assess models on common-sense problems. In this paper, we evaluate LLMs under RAG settings in a domain-specific context: college enrollment. We identify six abilities required of RAG models: conversational RAG, analyzing structural information, faithfulness to external knowledge, denoising, solving time-sensitive problems, and understanding multi-document interactions. Each ability has an associated dataset, built on shared corpora, for evaluating RAG models' performance. We evaluate popular LLMs such as Llama, Baichuan, ChatGLM, and GPT models. Experimental results indicate that existing closed-book LLMs struggle with domain-specific questions, highlighting the need for RAG models to solve expert problems. Moreover, RAG models still have room to improve in comprehending conversational history, analyzing structural information, denoising, processing multi-document interactions, and remaining faithful to expert knowledge. We hope future studies can address these problems.