Recent work has explored the capability of large language models (LLMs) to identify and correct errors in LLM-generated responses. These refinement approaches frequently evaluate which model sizes can perform refinement on which problems, but pay less attention to what effective feedback for refinement looks like. In this work, we propose viewing refinement with feedback as a composition of three distinct LLM competencies: (1) identifying bad generations; (2) generating fine-grained natural language feedback; (3) refining with that fine-grained feedback. The first step can be implemented with a high-performing discriminative model, and steps 2 and 3 can be implemented with either prompted or fine-tuned LLMs. A key property of this approach is that the step 2 critique model can give fine-grained feedback about errors, made possible by offloading discrimination to a separate model in step 1. We show that models of different capabilities benefit from refining with this approach on the task of improving the factual consistency of document-grounded summaries. Overall, our proposed method consistently outperforms existing end-to-end refinement approaches and current trained models not fine-tuned for factuality critiquing.
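The three-step decomposition can be summarized as a simple detect-critique-refine pipeline. The sketch below is illustrative only, assuming hypothetical `detect`, `critique`, and `refine` helpers: the `detector` callable stands in for the step 1 discriminative model, `llm` for a prompted or fine-tuned LLM, and none of the prompts or signatures come from the original work.

```python
# Minimal sketch of the three-step refinement pipeline (illustrative;
# function names, prompts, and signatures are assumptions, not the
# paper's actual implementation).
from typing import Callable


def detect(detector: Callable[[str, str], float], document: str,
           summary: str, threshold: float = 0.5) -> bool:
    """Step 1: flag a summary as factually inconsistent when the
    discriminative model's consistency score falls below a threshold."""
    return detector(document, summary) < threshold


def critique(llm: Callable[[str], str], document: str, summary: str) -> str:
    """Step 2: ask an LLM for fine-grained natural language feedback on a
    summary already known (from step 1) to contain errors."""
    prompt = (f"Document:\n{document}\n\nSummary:\n{summary}\n\n"
              "List the specific factual errors in the summary.")
    return llm(prompt)


def refine(llm: Callable[[str], str], document: str, summary: str,
           feedback: str) -> str:
    """Step 3: rewrite the summary so that it addresses the feedback."""
    prompt = (f"Document:\n{document}\n\nSummary:\n{summary}\n\n"
              f"Feedback:\n{feedback}\n\n"
              "Rewrite the summary, fixing only the listed errors.")
    return llm(prompt)


def detect_critique_refine(detector: Callable[[str, str], float],
                           llm: Callable[[str], str],
                           document: str, summary: str) -> str:
    """Compose the three steps: only refine summaries the detector flags,
    so the critique model never has to decide whether an error exists."""
    if not detect(detector, document, summary):
        return summary  # judged consistent; leave it unchanged
    feedback = critique(llm, document, summary)
    return refine(llm, document, summary, feedback)
```

Note how offloading discrimination to the detector means the critique prompt can presuppose that errors exist and focus entirely on describing them, which is the key property the abstract highlights.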