Large language models (LLMs) such as Llama, Baichuan, and Bloom show remarkable ability on many natural language tasks after instruction fine-tuning. Nevertheless, for the dialogue summarization task, which aims to generate summaries for the different roles in a dialogue, most state-of-the-art methods are still built on small models (e.g., BART and BERT). Existing methods add task-specific optimizations to these small models, such as incorporating a global-local centrality score. In this paper, we propose an instruction fine-tuned model, Baichuan2-Sum, for role-oriented dialogue summarization. By setting different instructions for different roles, the model can learn from the dialogue interactions and output the expected summaries. Furthermore, we apply the NEFTune technique, which adds suitable noise to embeddings during training, to improve the results. Experiments demonstrate that the proposed model achieves new state-of-the-art results on two public dialogue summarization datasets: CSDS and SAMSum. We release our model and related code to facilitate future studies on the dialogue summarization task.
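As a point of reference for the noise-injection step mentioned above, the core NEFTune idea is to perturb the token embeddings with uniform noise scaled by the sequence length and embedding dimension. The sketch below is a minimal, hypothetical PyTorch illustration of that idea (the function name and the `alpha` value are illustrative, not the paper's actual implementation):

```python
import torch

def neftune_noise(embeddings, attention_mask, alpha=5.0):
    """Add NEFTune-style uniform noise to token embeddings.

    embeddings: (batch, seq_len, dim) tensor from the embedding layer.
    attention_mask: (batch, seq_len) tensor of 1s for real tokens, 0s for padding.
    alpha: noise magnitude hyperparameter (illustrative default).
    """
    # Effective sequence length L per example, shaped for broadcasting.
    L = attention_mask.sum(dim=1, keepdim=True).unsqueeze(-1).float()
    d = embeddings.size(-1)
    # NEFTune scales uniform noise in [-1, 1] by alpha / sqrt(L * d).
    scale = alpha / torch.sqrt(L * d)
    noise = torch.empty_like(embeddings).uniform_(-1.0, 1.0)
    return embeddings + scale * noise
```

During training the perturbed embeddings replace the clean ones before the transformer layers; at inference time no noise is added.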