In dyadic interaction, predicting the listener's facial reactions is challenging because different reactions can be appropriate responses to the same speaker behaviour. Previous approaches predominantly treated this task as an interpolation or fitting problem, emphasizing deterministic outcomes while ignoring the diversity and uncertainty of human facial reactions. Furthermore, these methods often failed to model short-range and long-range dependencies within the interaction context, harming the synchrony and appropriateness of the generated facial reactions. To address these limitations, this paper reformulates the task as an extrapolation or prediction problem and proposes a novel framework (called ReactFace) that generates multiple different but appropriate facial reactions from a speaker's behaviour rather than merely replicating the corresponding listener's facial behaviours. ReactFace generates multiple different but appropriate photo-realistic human facial reactions by: (i) learning a distribution that represents multiple different but appropriate facial reactions; and (ii) synchronizing the generated facial reactions with the speaker's verbal and non-verbal behaviours at each timestamp, resulting in realistic 2D facial reaction sequences. Experimental results demonstrate the effectiveness of our approach in generating multiple diverse, synchronized, and appropriate facial reactions from each speaker's behaviour. The quality of the generated facial reactions is closely tied to the speaker's speech and facial expressions, achieved through our novel speaker-listener interaction modules. Our code is publicly available at \url{https://github.com/lingjivoo/ReactFace}.