Advancements in neural engineering have enabled the development of Robotic Prosthetic Hands (RPHs) aimed at restoring hand functionality. Current commercial RPHs offer only limited control through basic on/off commands. Recent progress in machine learning enables finger movement decoding with higher degrees of freedom, yet the high computational complexity of such models limits their use in portable devices. Future RPH designs must balance portability, low power consumption, and high decoding accuracy to be practical for individuals with disabilities. To this end, we introduce a novel attractor-based neural network to realize on-chip movement decoding for next-generation portable RPHs. The proposed architecture comprises an encoder, an attention layer, an attractor network, and a refinement regressor. We tested our model on four healthy subjects and achieved a decoding accuracy of 80.3%. Our model is over 120 and 50 times more compact than state-of-the-art LSTM and CNN models, respectively, with comparable (or superior) decoding accuracy. It therefore exhibits minimal hardware complexity and can be effectively integrated as a System-on-Chip.
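To make the four-stage pipeline concrete, the following is a minimal, hedged sketch of the described architecture (encoder, attention layer, attractor network, refinement regressor). All dimensions, weight names, and the exact form of each stage are hypothetical illustrations, not the authors' implementation: the attractor network is rendered here as a small recurrent map iterated until the state settles toward a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 96 neural channels, 32-d latent state,
# 5 finger degrees of freedom, 20 time bins per decoding window.
N_CH, D, N_OUT, T = 96, 32, 5, 20

W_enc = rng.standard_normal((N_CH, D)) * 0.1   # encoder weights (assumed)
w_att = rng.standard_normal(D) * 0.1           # attention scoring vector (assumed)
W_rec = rng.standard_normal((D, D)) * 0.05     # recurrent attractor weights (assumed)
W_out = rng.standard_normal((D, N_OUT)) * 0.1  # refinement regressor weights (assumed)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def decode(x, settle_steps=5):
    """Map (T, N_CH) binned neural features to (N_OUT,) finger outputs."""
    h = np.tanh(x @ W_enc)            # encoder: per-bin latent codes
    a = softmax(h @ w_att)            # attention: weights over time bins
    ctx = a @ h                       # attended context vector, shape (D,)
    z = np.zeros(D)
    for _ in range(settle_steps):     # attractor dynamics: iterate the state
        z = np.tanh(W_rec @ z + ctx)  # toward a fixed point of the map
    return z @ W_out                  # refinement regressor: final readout

y = decode(rng.standard_normal((T, N_CH)))
print(y.shape)  # (5,)
```

Collapsing the time dimension with attention before the attractor stage keeps the recurrent state small, which is consistent with the abstract's emphasis on compactness for on-chip deployment.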