Enabling Large Language Models (LLMs) to generate citations in Question-Answering (QA) tasks is an emerging paradigm that aims to enhance the verifiability of their responses when they draw on external references to generate an answer. However, there is currently no unified framework for standardizing and fairly comparing citation generation methods, which makes it difficult to reproduce existing methods and to assess them comprehensively. To address these problems, we introduce \name, an open-source and modular toolkit designed to facilitate the implementation and evaluation of existing citation generation methods, while also fostering the development of new approaches that improve citation quality in LLM outputs. The toolkit is highly extensible: users can combine its 4 main modules and 14 components into pipelines to evaluate existing methods or innovative designs. Our experiments with two state-of-the-art LLMs and 11 citation generation baselines show that different modules have distinct strengths in improving answer accuracy and citation quality, and that enhancing citation granularity remains challenging. Based on our analysis of component effectiveness, we propose a new method, self-RAG \snippet, which achieves a balance between answer accuracy and citation quality. Citekit is released at https://github.com/SjJ1017/Citekit.