Dynamic hand gestures play a crucial role in conveying nonverbal information for Human-Robot Interaction (HRI), eliminating the need for complex interfaces. Current models for dynamic gesture recognition suffer from a limited effective recognition range, restricting their application to close-proximity scenarios. In this letter, we present a novel approach to recognizing dynamic gestures at ultra-range distances of up to 28 meters, enabling natural, directive communication for guiding robots in both indoor and outdoor environments. Our proposed SlowFast-Transformer (SFT) model effectively integrates the SlowFast architecture with Transformer layers to efficiently process and classify gesture sequences captured at ultra-range distances, overcoming the challenges of low resolution and environmental noise. We further introduce a distance-weighted loss function shown to enhance learning and improve model robustness at varying distances. Our model demonstrates significant performance improvement over state-of-the-art gesture recognition frameworks, achieving a recognition accuracy of 95.1% on a diverse dataset with challenging ultra-range gestures. This enables robots to react appropriately to human commands issued from far away, providing an essential enhancement in HRI, especially in scenarios requiring seamless and natural interaction.
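To illustrate the idea behind the distance-weighted loss, a minimal sketch follows. It is not the paper's implementation: the weighting form `1 + alpha * (distance / d_max)`, the names `distance_weighted_ce`, `alpha`, and `d_max`, and the choice of cross-entropy as the base loss are all illustrative assumptions; the principle shown is simply that samples captured farther from the camera contribute more to the loss, pushing the model to learn the harder long-range cases.

```python
import math

def distance_weighted_ce(probs, label, distance, d_max=28.0, alpha=1.0):
    """Hypothetical distance-weighted cross-entropy.

    probs    -- predicted class probabilities for one sample
    label    -- index of the true class
    distance -- capture distance in meters
    d_max    -- maximum operating distance (28 m in the abstract)
    alpha    -- assumed scaling factor for the distance term
    """
    # Weight grows linearly with distance, so far-away (low-resolution,
    # noisy) samples are penalized more heavily when misclassified.
    w = 1.0 + alpha * (distance / d_max)
    return -w * math.log(probs[label])

# Identical predictions, different capture distances: the far sample
# incurs a strictly larger loss than the near one.
near_loss = distance_weighted_ce([0.7, 0.3], 0, distance=2.0)
far_loss = distance_weighted_ce([0.7, 0.3], 0, distance=25.0)
```

In a training loop this weight would simply multiply the per-sample loss before averaging over the batch, leaving the rest of the pipeline unchanged.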