Human-machine interaction, particularly in prosthetic and robotic control, has advanced with gesture recognition via surface electromyographic (sEMG) signals. However, classifying similar gestures that produce nearly identical muscle signals remains a challenge and often reduces classification accuracy. Traditional deep learning models for sEMG gesture recognition are large and computationally expensive, limiting their deployment on resource-constrained embedded systems. In this work, we propose WaveFormer, a lightweight transformer-based architecture tailored for sEMG gesture recognition. Our model integrates time-domain and frequency-domain features through a novel learnable wavelet transform, enhancing feature extraction. In particular, the WaveletConv module, a multi-level wavelet decomposition layer with depthwise separable convolution, ensures both efficiency and compactness. With just 3.1 million parameters, WaveFormer achieves 95% classification accuracy on the EPN612 dataset, outperforming larger models. Furthermore, when profiled on a laptop with an Intel CPU, INT8 quantization enables real-time deployment with a 6.75 ms inference latency.
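The abstract describes WaveletConv as a multi-level wavelet decomposition layer combined with depthwise separable convolution. The paper's actual implementation is not given here; the following is a minimal NumPy sketch of the general idea, assuming Haar-initialized lowpass/highpass filters (`lo`, `hi`) that the real model would treat as learnable parameters. All function names, filter lengths, and tensor shapes are illustrative, not taken from the paper.

```python
import numpy as np

def dwt_level(x, lo, hi):
    """One wavelet decomposition level on a (channels, time) signal.

    Each channel is filtered with the lowpass (lo) and highpass (hi)
    kernels and downsampled by 2, yielding approximation and detail bands.
    The [1::2] indexing pairs samples (x[0],x[1]), (x[2],x[3]), ... which
    is exact for length-2 (Haar-style) filters on even-length inputs.
    """
    def conv_down(sig, f):
        return np.convolve(sig, f, mode="full")[1::2]
    approx = np.stack([conv_down(c, lo) for c in x])
    detail = np.stack([conv_down(c, hi) for c in x])
    return approx, detail

def waveletconv(x, lo, hi, levels):
    """Multi-level decomposition: recursively split the approximation band."""
    details = []
    a = x
    for _ in range(levels):
        a, d = dwt_level(a, lo, hi)
        details.append(d)
    return a, details

def depthwise_separable(x, dw, pw):
    """Depthwise separable convolution on a (channels, time) signal.

    dw: (channels, k) per-channel (depthwise) kernels.
    pw: (out_channels, channels) pointwise (1x1) mixing matrix.
    """
    dw_out = np.stack(
        [np.convolve(x[c], dw[c], mode="same") for c in range(x.shape[0])]
    )
    return pw @ dw_out
```

With Haar filters `lo = [1, 1]/sqrt(2)` and `hi = [1, -1]/sqrt(2)`, a single level preserves signal energy exactly, which is a quick sanity check that the decomposition is orthonormal; the depthwise separable stage then mixes channels at a fraction of the cost of a full convolution.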