Machine learning offers promising methods for processing signals recorded with wearable devices such as surface electromyography (sEMG) and electroencephalography (EEG). However, despite high within-session performance, inter-session performance in these applications is hindered by electrode shift, a known issue across modalities. Existing solutions often require large, expensive datasets and/or lack robustness and interpretability. We therefore propose the Spatial Adaptation Layer (SAL), which can be prepended to any biosignal-array model and learns a parametrized affine transformation at the input that maps between two recording sessions. We also introduce learnable baseline normalization (LBN) to reduce between-session baseline fluctuations. Tested on two HD-sEMG gesture-recognition datasets, SAL and LBN outperformed standard fine-tuning on regular arrays, achieving competitive performance even with a logistic regressor, using orders of magnitude fewer, physically interpretable parameters. An ablation study showed that forearm circumferential translations account for the majority of the performance improvement.
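To make the idea concrete, the input-level affine transformation described above can be sketched as a small PyTorch module. This is a minimal illustration, not the authors' implementation: the class name `SpatialAdaptationLayer`, the identity initialization, and the 8×16 grid size are assumptions for the example; it treats the HD-sEMG electrode array as a 2-D image and resamples it under a learnable 2×3 affine matrix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAdaptationLayer(nn.Module):
    """Hypothetical sketch of a SAL-style layer: a single learnable
    2x3 affine matrix, initialized to the identity, that warps the
    electrode-array input before it reaches the downstream model."""

    def __init__(self):
        super().__init__()
        # Identity affine transform; gradient descent adapts it to
        # compensate for electrode shift between sessions.
        self.theta = nn.Parameter(torch.tensor([[1.0, 0.0, 0.0],
                                                [0.0, 1.0, 0.0]]))

    def forward(self, x):
        # x: (batch, channels, rows, cols) -- the electrode grid.
        theta = self.theta.unsqueeze(0).expand(x.size(0), -1, -1)
        grid = F.affine_grid(theta, list(x.size()), align_corners=False)
        # Bilinear resampling of the array under the affine map.
        # A circumferential (wrap-around) translation of the forearm
        # grid would require circular handling of columns, omitted here.
        return F.grid_sample(x, grid, align_corners=False)

sal = SpatialAdaptationLayer()
x = torch.randn(2, 1, 8, 16)  # assumed 8x16 HD-sEMG grid
y = sal(x)                    # same shape; identity at initialization
```

Because the layer holds only six parameters, it can sit in front of even a very simple classifier (such as the logistic regressor mentioned above) and be fit on a small amount of new-session data, while each parameter keeps a direct physical reading (translation, rotation, scaling of the array).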