The training of high-quality, robust machine learning models for speech-driven 3D facial animation requires a large, diverse dataset of high-quality audio-animation pairs. To overcome the lack of such a dataset, recent work has introduced large pre-trained speech encoders that are robust to variations in the input audio and, therefore, enable the facial animation model to generalize across speakers, audio quality, and languages. However, the resulting facial animation models are prohibitively large and lend themselves only to offline inference on a dedicated machine. In this work, we explore on-device, real-time facial animation models in the context of game development. We overcome the lack of large datasets by using hybrid knowledge distillation with pseudo-labeling. Given a large audio dataset, we employ a high-performing teacher model to train very small student models. In contrast to the pre-trained speech encoders, our student models consist only of convolutional and fully-connected layers, removing the need for attention context or recurrent updates. In our experiments, we demonstrate that we can reduce the memory footprint to as little as 3.4 MB and the required future audio context to as little as 81 ms while maintaining high-quality animations. This paves the way for on-device inference, an important step towards realistic, model-driven digital characters.
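The pseudo-labeling distillation setup described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration, not the paper's implementation: the teacher is stood in for by a frozen random linear map, the feature dimensions (26 audio features per frame, 52 output blendshapes, an 8-frame context window with 2 future frames, echoing the paper's limited lookahead) are illustrative choices, and the "student" is reduced to a single linear layer so that plain gradient descent on the teacher's pseudo-labels suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper): 26 audio
# features per frame, 52 blendshape outputs, an 8-frame context window
# of which 2 frames are future context (cf. the paper's limited lookahead).
N_FRAMES, N_FEAT, N_BLEND = 500, 26, 52
CTX, FUTURE = 8, 2

# Unlabeled audio features: the large audio dataset has no animation labels.
audio = rng.standard_normal((N_FRAMES, N_FEAT))

# Stand-in "teacher": a frozen random linear map over the context window,
# playing the role of the large, high-performing animation model.
W_teacher = rng.standard_normal((CTX * N_FEAT, N_BLEND)) * 0.1

def windows(x):
    # Stack each frame with its context: CTX-FUTURE-1 past frames,
    # the current frame, and FUTURE future frames.
    pad = np.pad(x, ((CTX - FUTURE - 1, FUTURE), (0, 0)))
    return np.stack([pad[i:i + CTX].ravel() for i in range(len(x))])

X = windows(audio)
pseudo_labels = X @ W_teacher  # teacher pseudo-labels the unlabeled audio

# Tiny linear student (a conv over the window plus an FC layer collapses
# to one matrix here), trained by gradient descent on the pseudo-labels.
W_student = np.zeros((CTX * N_FEAT, N_BLEND))
lr = 0.15
for _ in range(200):
    grad = (2 / len(X)) * X.T @ (X @ W_student - pseudo_labels)
    W_student -= lr * grad

final_mse = float(np.mean((X @ W_student - pseudo_labels) ** 2))
```

Because the student never needs ground-truth animation, only teacher outputs, any sufficiently large and diverse audio corpus can serve as training data; the student's fixed, small context window is what enables low-latency, on-device inference.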