In this paper, we introduce a new task, Reactive Listener Motion Generation from Speaker Utterance, which aims to generate naturalistic listener body motions that appropriately respond to a speaker's utterance. Modeling such nonverbal listener behaviors remains underexplored and challenging due to the inherently non-deterministic nature of human reactions. To facilitate this task, we present ReactMotionNet, a large-scale dataset that pairs each speaker utterance with multiple candidate listener motions annotated with varying degrees of appropriateness. This design explicitly captures the one-to-many nature of listener behavior and provides supervision beyond a single ground-truth motion. Building on the dataset, we develop preference-oriented evaluation protocols tailored to reactive appropriateness, an aspect that conventional motion metrics focused on input-motion alignment overlook. We further propose ReactMotion, a unified generative framework that jointly models text, audio, emotion, and motion, and is trained with preference-based objectives to encourage both appropriate and diverse listener responses. Extensive experiments show that ReactMotion outperforms retrieval baselines and cascaded LLM-based pipelines, generating more natural, diverse, and appropriate listener motions.