We introduce a two-stage multitask learning framework for analyzing Electroencephalography (EEG) signals that integrates denoising, dynamical modeling, and representation learning. In the first stage, a denoising autoencoder is trained to suppress artifacts and stabilize temporal dynamics, providing robust signal representations. In the second stage, a multitask architecture processes these denoised signals to achieve three objectives: motor imagery classification, chaotic versus non-chaotic regime discrimination using Lyapunov-exponent-based labels, and self-supervised contrastive representation learning with the NT-Xent loss. A convolutional backbone combined with a Transformer encoder captures spatio-temporal structure, while the dynamical task encourages sensitivity to nonlinear brain dynamics. This staged design mitigates interference between reconstruction and discriminative objectives, improves stability across datasets, and supports reproducible training by clearly separating noise reduction from higher-level feature learning. Empirical studies show that our framework not only enhances robustness and generalization but also surpasses strong baselines and recent state-of-the-art methods in EEG decoding, highlighting the effectiveness of combining denoising, dynamical features, and self-supervised learning.
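To make the contrastive objective concrete, the following is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss referenced above, in the SimCLR formulation over two augmented views of a batch. This is an illustrative reimplementation, not the paper's training code; the batch size, embedding dimension, and temperature below are arbitrary assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two views of a batch.

    z1, z2: (N, d) arrays of embeddings for two augmentations of the
    same N signals; row i of z1 and row i of z2 form a positive pair,
    and all other rows act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = (z @ z.T) / temperature                      # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    # The positive partner of row i is row i+N (and vice versa).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))        # softmax denominator
    loss = -(sim[np.arange(2 * n), pos_idx] - log_denom)
    return loss.mean()
```

In practice the loss is low when the two views of each signal map to nearby embeddings and high when positives are no closer than random negatives, which is the behavior the self-supervised task in stage two relies on.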
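The chaotic-versus-non-chaotic labeling idea can be illustrated on a textbook dynamical system: the sign of the largest Lyapunov exponent separates chaotic from non-chaotic regimes. The sketch below estimates the exponent for the logistic map and derives a binary label from its sign; this is a toy illustration of the labeling principle under stated assumptions, not the EEG-specific procedure used in the paper.

```python
import numpy as np

def logistic_lyapunov(r, x0=0.2, n_steps=2000, burn_in=200):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{t+1} = r * x_t * (1 - x_t) by averaging log|f'(x_t)|,
    where f'(x) = r * (1 - 2x), after discarding a transient."""
    x = x0
    total = 0.0
    for t in range(n_steps):
        x = r * x * (1.0 - x)
        if t >= burn_in:
            total += np.log(abs(r * (1.0 - 2.0 * x)))
    return total / (n_steps - burn_in)

def chaos_label(exponent):
    """Binary regime label: 1 = chaotic (positive exponent), 0 = otherwise."""
    return int(exponent > 0.0)
```

For example, `r = 4.0` yields a positive exponent (chaotic regime, label 1), while `r = 2.5` converges to a fixed point and yields a negative exponent (label 0). In the framework above, analogous exponent-based labels supervise the regime-discrimination head.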