Transfer learning (TL) has been widely used in electroencephalogram (EEG)-based brain-computer interfaces (BCIs) to reduce the calibration effort. However, backdoor attacks can be introduced through TL. In such attacks, an attacker embeds a backdoor with a specific pattern into the machine learning model. As a result, the model misclassifies any test sample containing the backdoor trigger into a prespecified class, while still maintaining good performance on benign samples. Accordingly, this study explores backdoor attacks in the TL of EEG-based BCIs, where source-domain data are poisoned by a backdoor trigger and then used in TL. We propose several active poisoning approaches that select the source-domain samples most effective at embedding the backdoor pattern, improving the attack success rate and efficiency. Experiments on four EEG datasets and three deep learning models demonstrate the effectiveness of these approaches. To our knowledge, this is the first study of backdoor attacks on TL models in EEG-based BCIs. It exposes a serious security risk in BCIs, which should be addressed immediately.
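To make the attack setting concrete, the following is a minimal sketch of source-domain backdoor poisoning as described above: a fixed trigger pattern is stamped onto a fraction of the source-domain EEG trials, and those trials are relabeled to the attacker's prespecified target class before the data are released for TL. The function name, the pulse-shaped trigger, and the `poison_rate` parameter are illustrative assumptions, not the paper's exact method or sample-selection strategy.

```python
import numpy as np

def poison_source_data(X, y, target_class, poison_rate=0.1,
                       amplitude=1.0, seed=0):
    """Embed a fixed backdoor trigger into a fraction of source-domain EEG trials.

    X: (n_trials, n_channels, n_samples) EEG data; y: (n_trials,) labels.
    Poisoned trials receive the trigger and are relabeled to target_class,
    so a model trained or fine-tuned on this data learns to associate the
    trigger with target_class while behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    X, y = X.copy(), y.copy()
    n_trials, n_channels, n_samples = X.shape

    # Randomly pick the trials to poison (the paper instead proposes
    # active selection of the most effective samples).
    idx = rng.choice(n_trials, size=int(poison_rate * n_trials),
                     replace=False)

    # Illustrative trigger: a brief constant pulse on all channels
    # over the last 50 time samples of each trial.
    trigger = np.zeros((n_channels, n_samples))
    trigger[:, -50:] = amplitude

    X[idx] += trigger       # stamp the backdoor pattern
    y[idx] = target_class   # relabel to the attacker's target class
    return X, y, idx
```

At test time, adding the same trigger to any benign trial should steer the backdoored model toward `target_class`; the paper's active poisoning approaches aim to achieve this with fewer poisoned source-domain samples than the random selection sketched here.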