This paper presents a novel neuromorphic control architecture for upper-limb prostheses that combines surface electromyography (sEMG) with gaze-guided computer vision. The system uses a spiking neural network deployed on the neuromorphic processor AltAi to classify EMG patterns in real time, while an eye-tracking headset and scene camera identify the object at the user's point of gaze. In our prototype, an EMG recognition model originally developed for a conventional GPU is converted to a spiking network and deployed on AltAi, achieving comparable accuracy while operating in a sub-watt power regime, which enables a lightweight, wearable implementation. For six distinct functional gestures recorded from upper-limb amputees, the system achieves recognition performance comparable to state-of-the-art myoelectric interfaces. When the vision pipeline restricts the decision space to the three gestures appropriate for the currently viewed object, recognition accuracy rises to approximately 95% while unsafe, object-inappropriate grasps are excluded. These results indicate that the proposed neuromorphic, context-aware controller can provide energy-efficient and reliable prosthesis control, with the potential to improve safety and usability in everyday activities for people with upper-limb amputation.
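The context-aware decision rule described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the gesture names, the object-to-gesture map, and the classifier scores are all hypothetical. The vision pipeline supplies the set of gestures appropriate for the viewed object, and the EMG classifier's output is restricted to that set before the final choice is made.

```python
# Illustrative sketch of gaze-guided restriction of the gesture decision space.
# All names (GESTURES, CONTEXT_GESTURES, the scores) are assumptions for the
# example, not the system's actual labels or values.

GESTURES = ["power", "pinch", "tripod", "lateral", "hook", "open"]

# Hypothetical context map: viewed object -> gestures appropriate for it.
CONTEXT_GESTURES = {
    "mug": {"power", "lateral", "hook"},
    "pen": {"pinch", "tripod", "lateral"},
}

def select_gesture(scores, viewed_object):
    """Pick the highest-scoring gesture allowed for the viewed object.

    scores: dict mapping gesture name -> classifier confidence.
    Falls back to the unrestricted argmax if the object is unknown.
    """
    allowed = CONTEXT_GESTURES.get(viewed_object, set(GESTURES))
    candidates = {g: s for g, s in scores.items() if g in allowed}
    return max(candidates, key=candidates.get)

# Usage: the raw scores favour "pinch", but "pinch" is not in the allowed
# set for a mug, so the context filter selects the best allowed gesture.
scores = {"power": 0.30, "pinch": 0.35, "tripod": 0.10,
          "lateral": 0.15, "hook": 0.05, "open": 0.05}
print(select_gesture(scores, "mug"))  # prints "power"
```

Restricting the candidate set in this way both raises accuracy (fewer confusable classes) and acts as a safety gate, since grasps that are inappropriate for the viewed object can never be selected.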