In multi-session dialog systems, it is essential to continuously update the memory as sessions progress. Simply accumulating memory can make it difficult to focus on the conversation content during inference because of the limited input length. Therefore, an efficient and accurate conversation model that manages memory so as to continuously reflect the conversation history is necessary. This paper presents a conversation model that efficiently manages memory as sessions progress and incorporates it into the model to reflect the conversation history accurately, using three methodologies: SFT, DPO, and DPO applied to the SFT model. Our model trained with the DPO algorithm improves memory accuracy by about 0.0591 BERTScore, and the rate of responses reflecting the memory increases as well. Response generation performance also improves by about 4.292 in fluency, 3.935 in coherence, and 2.896 in consistency. This paper describes a training method that yields better performance than models with more than twice the parameter count, even though our model is smaller. Thus, our model demonstrates efficiency not only in accuracy but also in resource utilization.
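For reference, the standard DPO objective the abstract builds on can be sketched as follows. This is a minimal per-pair loss in plain Python, not the paper's implementation; variable names are illustrative, and the reference model here is assumed to be the frozen SFT model:

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    Inputs are summed log-probabilities of the chosen (preferred) and
    rejected responses under the current policy and a frozen reference
    model (e.g., the SFT model). beta scales the implicit KL penalty.
    """
    # Implicit rewards: log-probability ratios against the reference.
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # Negative log-sigmoid of the margin: the loss shrinks as the policy
    # prefers the chosen response more strongly than the reference does.
    return math.log(1.0 + math.exp(-margin))
```

With a zero margin the loss is log 2, and it decreases monotonically as the policy's preference for the chosen response grows, which is what drives the memory-update behavior toward the preferred outputs.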