Dynamic gestures enable the transfer of directive information to a robot, and a robot's ability to recognize them from a long distance makes communication more effective and practical. However, current state-of-the-art dynamic gesture recognition models are limited in recognition distance, typically performing well only within a few meters. In this work, we propose a model that recognizes dynamic gestures at long distances of up to 20 meters. The model integrates the SlowFast and Transformer architectures (SFT) to effectively process and classify complex gesture sequences captured in video frames. SFT demonstrates superior performance over existing models.
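The SlowFast component of such a model samples each clip at two temporal rates: a slow pathway with a coarse stride that captures appearance and semantics, and a fast pathway sampled more densely to capture motion. The abstract does not give the exact pathway strides or the fusion scheme, so the following is a minimal sketch of the two-rate sampling idea with hypothetical parameters (`alpha`, `slow_stride`); in the full architecture the fused pathway features would then be passed to a Transformer for sequence classification.

```python
# Hedged sketch of SlowFast-style two-rate frame sampling.
# alpha and slow_stride are illustrative, not the paper's values.
from typing import List, Tuple

def slowfast_sample(
    frames: List[int], alpha: int = 4, slow_stride: int = 16
) -> Tuple[List[int], List[int]]:
    """Split a clip into a slow pathway (coarse temporal stride)
    and a fast pathway sampled alpha times more densely."""
    slow = frames[::slow_stride]           # low frame rate: semantics
    fast = frames[::slow_stride // alpha]  # high frame rate: motion
    return slow, fast

clip = list(range(64))  # a 64-frame gesture clip (frame indices)
slow, fast = slowfast_sample(clip)
print(len(slow), len(fast))  # 4 slow frames, 16 fast frames
```

In the real model the pathways are convolutional feature extractors rather than index lists, but the asymmetry shown here (few frames at high channel capacity vs. many frames at low capacity) is the core SlowFast design choice.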