Active speaker detection (ASD) in multimodal environments is crucial for applications ranging from video conferencing to human-robot interaction. This paper introduces FabuLight-ASD, an advanced ASD model that integrates facial, audio, and body pose information to enhance detection accuracy and robustness. Our model builds upon the existing Light-ASD framework by incorporating human pose data, represented through skeleton graphs, which minimises computational overhead. Using the Wilder Active Speaker Detection (WASD) dataset, renowned for its reliable face and body bounding box annotations, we demonstrate FabuLight-ASD's effectiveness in real-world scenarios. With an overall mean average precision (mAP) of 94.3%, FabuLight-ASD outperforms Light-ASD (overall mAP of 93.7%) across various challenging scenarios. The incorporation of body pose information proves particularly advantageous, with notable mAP improvements in scenarios involving speech impairment, face occlusion, and human-voice background noise. Furthermore, an efficiency analysis indicates only a modest increase in parameter count (27.3%) and multiply-accumulate operations (up to 2.4%), underscoring the model's efficiency and feasibility. These findings validate the efficacy of FabuLight-ASD in enhancing ASD performance through the integration of body pose data. FabuLight-ASD's code and model weights are available at https://github.com/knowledgetechnologyuhh/FabuLight-ASD.