Emotion recognition and touch gesture decoding are crucial for advancing human-robot interaction (HRI), especially in social environments where emotional cues and tactile perception play important roles. However, many humanoid robots, such as Pepper, Nao, and Furhat, lack full-body tactile skin, limiting their ability to engage in touch-based emotional and gesture interactions. In addition, vision-based emotion recognition methods usually face strict GDPR compliance challenges because they collect personal facial data. To address these limitations and avoid privacy issues, this paper studies the potential of using the sounds produced by touch during HRI to recognise tactile gestures and to classify emotions along the arousal and valence dimensions. Using a dataset of tactile gestures and emotional interactions between 28 participants and the humanoid robot Pepper, we design an audio-only lightweight touch gesture and emotion recognition model with only 0.24M parameters, a 0.94MB model size, and 0.7G FLOPs. Experimental results show that the proposed sound-based model effectively recognises the arousal and valence states of different emotions, as well as various tactile gestures, across varying input audio lengths. The proposed model is low-latency and achieves results comparable to those of well-known pretrained audio neural networks (PANNs), but with far fewer FLOPs and parameters and a much smaller model size.