We introduce LLMQuoter, a lightweight, distillation-based model designed to enhance Retrieval Augmented Generation (RAG) by extracting the most relevant textual evidence for downstream reasoning tasks. Built on the LLaMA-3B architecture and fine-tuned with Low-Rank Adaptation (LoRA) on a 15,000-sample subset of HotpotQA, LLMQuoter adopts a "quote-first-then-answer" strategy, efficiently identifying key quotes before passing curated snippets to reasoning models. This workflow reduces cognitive overhead and outperforms full-context approaches like Retrieval-Augmented Fine-Tuning (RAFT), achieving over 20-point accuracy gains across both small and large language models. By leveraging knowledge distillation from a high-performing teacher model, LLMQuoter achieves competitive results in a resource-efficient fine-tuning setup. It democratizes advanced RAG capabilities, delivering significant performance improvements without requiring extensive model retraining. Our results highlight the potential of distilled quote-based reasoning to streamline complex workflows, offering a scalable and practical solution for researchers and practitioners alike.
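The "quote-first-then-answer" workflow described above can be sketched as a two-stage pipeline: a quoter model first extracts supporting quotes from the retrieved context, and only those curated snippets are handed to the downstream reasoner. The sketch below is a minimal illustration with stand-in functions; the keyword-overlap heuristic and both function bodies are assumptions for demonstration, not the paper's actual models (a real system would call the fine-tuned LLaMA-3B quoter and a reasoning LLM at the marked points).

```python
# Minimal sketch of the quote-first-then-answer pipeline.
# Both stages are hypothetical stand-ins, not the paper's models.

def extract_quotes(question: str, context: str) -> list[str]:
    """Stage 1 (quoter): keep sentences from the context that share a
    term with the question. A real system would call the fine-tuned
    LLaMA-3B quoter here instead of this keyword heuristic."""
    terms = {w.lower().strip("?.,") for w in question.split()}
    return [
        s.strip() for s in context.split(".")
        if s.strip() and terms & {w.lower() for w in s.split()}
    ]

def answer(question: str, quotes: list[str]) -> str:
    """Stage 2 (reasoner): answer from the curated quotes only.
    A real system would prompt a downstream reasoning model here."""
    return quotes[0] if quotes else "unknown"

context = ("Paris is the capital of France. "
           "The Eiffel Tower opened in 1889. "
           "Berlin is the capital of Germany.")
question = "What is the capital of France?"

# The reasoner never sees the full context, only the extracted quotes.
quotes = extract_quotes(question, context)
print(answer(question, quotes))
```

The key design point is that stage 2 receives only the quotes, not the full retrieved passage, which is what reduces the reasoner's cognitive load relative to full-context approaches such as RAFT.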