Meta-learning, i.e., "learning to learn", is a promising approach to enabling efficient BCI classifier training with limited amounts of data. It can effectively exploit collections of related classification tasks, adapting rapidly to new tasks for which only minimal data are available. However, applying meta-learning to existing classifiers and BCI tasks requires significant effort. To address this issue, we propose EEG-Reptile, an automated library that leverages meta-learning to improve the classification accuracy of neural networks in BCIs and other EEG-based applications. It uses the Reptile meta-learning algorithm to adapt neural network classifiers of EEG data to the inter-subject domain, enabling more efficient fine-tuning for a new subject on a small amount of data. The library comprises an automated hyperparameter tuning module, a data management pipeline, and an implementation of the Reptile algorithm. Its level of automation allows it to be used without a deep understanding of meta-learning. We demonstrate the effectiveness of EEG-Reptile on two benchmark datasets (BCI IV 2a, Lee2019 MI) and three neural network architectures (EEGNet, FBCNet, EEG-Inception). The library achieves improvements in both zero-shot and few-shot learning scenarios compared to traditional transfer learning approaches.
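To illustrate the core idea behind Reptile (Nichol et al., 2018) referenced above, the sketch below shows its meta-update on a toy problem. This is an assumption-laden illustration, not EEG-Reptile's actual API: tasks are 1-D linear regressions with per-task slope `a`, standing in for per-subject EEG classification tasks, and all names (`inner_sgd`, `reptile`) are hypothetical.

```python
import numpy as np

# Illustrative Reptile sketch on toy 1-D regression tasks y = a * x.
# Each sampled slope `a` plays the role of one subject's task.
rng = np.random.default_rng(0)

def inner_sgd(theta, a, steps=32, lr=0.02):
    """Inner loop: adapt parameter `theta` to one task with plain SGD."""
    phi = theta
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        grad = 2.0 * (phi * x - a * x) * x   # d/dphi of (phi*x - a*x)^2
        phi -= lr * grad
    return phi

def reptile(meta_iters=200, meta_lr=0.25):
    """Outer loop: move the initialization toward each adapted solution."""
    theta = 0.0
    for _ in range(meta_iters):
        a = rng.uniform(1.0, 3.0)            # sample a task ("subject")
        phi = inner_sgd(theta, a)            # inner-loop adaptation
        theta += meta_lr * (phi - theta)     # Reptile meta-update
    return theta

theta = reptile()
print(theta)  # drifts toward the mean task slope (~2.0)
```

The meta-update `theta += meta_lr * (phi - theta)` is all that distinguishes Reptile from ordinary per-task training: the shared initialization is pulled toward each task's adapted weights, yielding a starting point from which a few gradient steps suffice for a new task, which is what enables the few-shot fine-tuning for a new subject described above.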