To improve user engagement during conversations with dialogue systems, we must improve not only individual dialogue responses but also impressions of the dialogue as a whole, such as consistency, personality, and empathy. While such dialogue systems have developed rapidly with the help of large language models (LLMs), reinforcement learning from AI feedback (RLAIF) has attracted attention as a way to align LLM-based dialogue models with these dialogue impressions. In RLAIF, a reward model based on another LLM generates a training signal for the LLM-based dialogue model using zero-shot/few-shot prompting techniques. However, evaluating an entire dialogue solely by prompting LLMs is challenging. In this study, we prepared reward models by supervised fine-tuning (SFT) of LLMs, corresponding to 12 metrics related to the impression of the entire dialogue, and used them to evaluate dialogue responses. We tuned our dialogue models using the reward model signals as feedback to improve the impression made by the system. The results of automatic and human evaluations showed that tuning the dialogue model with our dialogue-impression reward models improved both the individual metric scores and the naturalness of the dialogue responses.
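As a concrete illustration of the reward-scoring step described above, the following is a minimal sketch in Python. It assumes each impression metric has its own SFT-trained regression-style reward model; the `reward-models/<metric>` paths, the three-metric subset, and the uniform averaging are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of scoring a whole dialogue with SFT-trained reward models.
# Assumptions (not from the paper): one AutoModelForSequenceClassification
# regression head per metric, saved under reward-models/<metric>, and a
# simple average across metrics as the final RLAIF training signal.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

METRICS = ["consistency", "personality", "empathy"]  # 3 of the 12 metrics


def score_dialogue(dialogue_text: str, model_dir: str) -> float:
    """Return one metric's impression score for the entire dialogue."""
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir)
    model.eval()
    inputs = tokenizer(dialogue_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 1) for a regression head
    return logits.squeeze().item()


def combined_reward(dialogue_text: str) -> float:
    """Average the per-metric scores into a single scalar reward."""
    scores = [score_dialogue(dialogue_text, f"reward-models/{m}") for m in METRICS]
    return sum(scores) / len(scores)
```

In an actual RLAIF loop, this scalar would serve as the reward for a policy-gradient update (e.g., PPO) of the dialogue model; how the paper aggregates the 12 metric signals may differ from the simple average shown here.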