Large Language Models (LLMs) frequently hallucinate, impeding their reliability in mission-critical applications. One approach to mitigate this issue is to provide citations to relevant sources alongside generated content, enhancing the verifiability of generations. However, accurately citing passages in answers remains a substantial challenge. This paper proposes a weakly-supervised fine-tuning method leveraging factual consistency models (FCMs). Our approach alternates between generating texts with citations and supervised fine-tuning on FCM-filtered citation data. Focused learning is integrated into the objective, directing the fine-tuning process to emphasise the factual unit tokens, as measured by an FCM. Results on the ALCE few-shot citation benchmark with various instruction-tuned LLMs demonstrate that our method outperforms in-context learning, vanilla supervised fine-tuning, and state-of-the-art methods, with average improvements of $34.1$, $15.5$, and $10.5$ citation F$_1$ points, respectively. Moreover, in a domain-transfer setting, we show that the acquired citation-generation ability transfers robustly to unseen datasets. Notably, our citation improvements yield the lowest factual error rate among all baselines.
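The two ingredients named above, FCM-based filtering of generated citation data and a focused objective that up-weights factual unit tokens, can be illustrated with a minimal sketch. All names here (`filter_by_fcm`, `focused_loss`, the `fcm_score` field, the `focus_weight` value) are hypothetical illustrations, not the paper's actual implementation.

```python
def filter_by_fcm(samples, threshold=0.5):
    """Keep only generated (answer, citation) samples whose
    factual-consistency score meets the threshold (assumed field)."""
    return [s for s in samples if s["fcm_score"] >= threshold]

def focused_loss(token_losses, factual_mask, focus_weight=2.0):
    """Weighted average of per-token losses: tokens the FCM flags as
    factual units (mask == 1) receive a larger weight, so the
    fine-tuning signal concentrates on them."""
    weights = [focus_weight if m else 1.0 for m in factual_mask]
    total = sum(w * l for w, l in zip(weights, token_losses))
    return total / sum(weights)
```

In this sketch, training would alternate between sampling citation-bearing generations, filtering them with `filter_by_fcm`, and fine-tuning on the survivors under `focused_loss`.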