Subject-specific distribution shifts are a fundamental obstacle to building foundation models for brain decoding. We propose the Subject-Specific Low-Rank Adapter (SuLoRA), a drop-in replacement for standard linear and convolutional layers that captures inter-subject variability by decomposing each weight matrix into a shared, subject-invariant component and a lightweight, low-rank correction unique to each subject. This explicit separation makes existing architectures robust to subject shift without architectural redesign. We evaluate SuLoRA on MEG speech perception and EEG motor imagery tasks across CNN and transformer architectures. On the speech decoding task, SuLoRA exceeds baseline performance with half as many parameters. On the motor imagery dataset, SuLoRA outperforms both subject-agnostic models and independently trained subject-specific models. SuLoRA thus offers a practical path toward effective cross-subject foundation models for brain signal applications.
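The decomposition described above can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code: it assumes a SuLoRA-style linear layer whose effective weight for subject s is W_shared + B_s @ A_s, with B_s initialized to zero so every subject starts from the shared weight (a common LoRA-style initialization). All names (`SuLoRALinear`, `rank`, `n_subjects`) are hypothetical.

```python
import numpy as np

class SuLoRALinear:
    """Sketch of a subject-specific low-rank adapted linear layer.

    Hypothetical API: effective weight for subject s is
        W_s = W_shared + B[s] @ A[s],
    where B[s] @ A[s] has rank at most `rank`.
    """

    def __init__(self, d_in, d_out, n_subjects, rank, seed=0):
        rng = np.random.default_rng(seed)
        # Shared, subject-invariant component.
        self.W = rng.standard_normal((d_out, d_in)) * 0.02
        # Per-subject low-rank factors; B starts at zero so each subject's
        # effective weight initially equals the shared weight.
        self.A = rng.standard_normal((n_subjects, rank, d_in)) * 0.02
        self.B = np.zeros((n_subjects, d_out, rank))

    def forward(self, x, subject):
        # Shared weight plus the lightweight correction for this subject.
        W_s = self.W + self.B[subject] @ self.A[subject]
        return x @ W_s.T

layer = SuLoRALinear(d_in=64, d_out=32, n_subjects=4, rank=2)
x = np.ones((5, 64))
y = layer.forward(x, subject=1)
```

Only the small A and B factors are trained per subject, so the parameter cost of adding a new subject is O(rank * (d_in + d_out)) per layer rather than a full copy of the model.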