Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI. LLMs used in RAG applications must faithfully and completely comprehend the provided context and the user's question, avoid hallucination, handle unanswerable, counterfactual, or otherwise low-quality and irrelevant contexts, perform complex multi-hop reasoning, and produce reliable citations. In this paper, we introduce SFR-RAG, a small LLM instruction-tuned with an emphasis on context-grounded generation and hallucination minimization. We also present ContextualBench, a new evaluation framework compiling multiple popular and diverse RAG benchmarks, such as HotpotQA and TriviaQA, under consistent RAG settings to ensure reproducibility and consistency in model assessments. Experimental results show that our SFR-RAG-9B model outperforms leading baselines such as Command-R+ (104B) and GPT-4o, achieving state-of-the-art results on 3 out of 7 benchmarks in ContextualBench with significantly fewer parameters. The model is also shown to be resilient to alterations in the contextual information and to behave appropriately when relevant context is removed. Additionally, SFR-RAG maintains competitive performance on general instruction-following tasks and function-calling capabilities.