State-of-the-art Active Speaker Detection (ASD) approaches mainly use audio and facial features as input. However, the main hypothesis of this paper is that body dynamics are also highly correlated with "speaking" (and "listening") actions and should be particularly useful in wild conditions (e.g., surveillance settings), where the face cannot be reliably accessed. We propose ASDnB, a model that singularly integrates face and body information by merging the inputs at different steps of feature extraction. Our approach splits 3D convolution into 2D and 1D convolutions to reduce computational cost without loss of performance, and is trained with adaptive weighting of feature importance so that body data better complements face data. Our experiments show that ASDnB achieves state-of-the-art results on the benchmark dataset (AVA-ActiveSpeaker), on the challenging WASD data, and in cross-domain settings using Columbia. ASDnB thus performs well across multiple settings, making it a strong baseline for robust ASD models (code available at https://github.com/Tiago-Roxo/ASDnB).
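To illustrate why the 3D-to-(2D + 1D) split reduces cost, a minimal sketch follows that compares parameter counts of a full 3D convolution against a factorized spatial (2D) plus temporal (1D) pair. The function names and the choice of intermediate channel width are illustrative assumptions, not the paper's exact design:

```python
def conv3d_params(c_in, c_out, k):
    # Full 3D convolution: one k x k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k * k

def conv2p1d_params(c_in, c_out, k, c_mid=None):
    # Factorized (2D + 1D) convolution: a k x k spatial conv followed by a
    # length-k temporal conv, linked by an intermediate channel width.
    # Assumption: c_mid defaults to c_out; real designs may choose it to
    # match the parameter budget or capacity of the full 3D layer.
    if c_mid is None:
        c_mid = c_out
    spatial = c_in * c_mid * k * k   # 2D spatial stage
    temporal = c_mid * c_out * k     # 1D temporal stage
    return spatial + temporal

# Example: 64 -> 64 channels with a 3 x 3 x 3 kernel.
full = conv3d_params(64, 64, 3)      # 110,592 parameters
split = conv2p1d_params(64, 64, 3)   # 49,152 parameters
```

With these (assumed) settings the factorized form uses under half the parameters of the full 3D layer, which translates directly into fewer multiply-accumulates per output element.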