Electroencephalogram (EEG) signals are critical for detecting abnormal brain activity, but their high dimensionality and complexity pose significant challenges for effective analysis. In this paper, we propose CAE-T, a novel framework that combines a channelwise CNN-based autoencoder with a single-head transformer classifier for efficient EEG abnormality detection. The channelwise autoencoder compresses raw EEG signals while preserving channel independence, reducing computational costs and retaining biologically meaningful features. The compressed representations are then fed into the transformer-based classifier, which efficiently models long-term dependencies to distinguish between normal and abnormal signals. Evaluated on the TUH Abnormal EEG Corpus, the proposed model achieves 85.0% accuracy, 76.2% sensitivity, and 91.2% specificity at the per-case level, outperforming baseline models such as EEGNet, Deep4Conv, and FusionCNN. Furthermore, CAE-T requires only 202M FLOPs and 2.9M parameters, making it significantly more efficient than transformer-based alternatives. The framework retains interpretability through its channelwise design, demonstrating great potential for future applications in neuroscience research and clinical practice. The source code is available at https://github.com/YossiZhao/CAE-T.
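The pipeline described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the strided-averaging encoder stands in for the CNN autoencoder, the shapes (21 channels, 2000 samples, stride 50) are hypothetical, and the classification head is reduced to a placeholder. It only shows the two key ideas: each channel is compressed independently (no cross-channel mixing), and a single attention head then models dependencies over the compressed representations.

```python
import numpy as np

# Hypothetical shapes, not taken from the paper: 21 EEG channels,
# a 2000-sample window, and a downsampling stride of 50.
C, T, STRIDE = 21, 2000, 50

rng = np.random.default_rng(0)
x = rng.standard_normal((C, T))  # one raw multichannel EEG window

def channelwise_encode(x, stride=STRIDE):
    """Stand-in for the channelwise CNN encoder: each channel is
    compressed on its own (here by strided averaging), so the
    compression never mixes information across channels."""
    C, T = x.shape
    n_tokens = T // stride
    return x[:, :n_tokens * stride].reshape(C, n_tokens, stride).mean(axis=2)

z = channelwise_encode(x)  # (21, 40): one compressed vector per channel

def single_head_attention(tokens, rng):
    """Minimal single-head self-attention over the compressed tokens,
    with randomly initialized (untrained) projection weights."""
    d = tokens.shape[-1]
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)
    # numerically stable softmax over each row of attention scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Each channel's compressed vector is treated as one token.
attended = single_head_attention(z, rng)     # (21, 40)
logit = attended.mean()                      # placeholder classification head
label = "abnormal" if logit > 0 else "normal"
print(z.shape, attended.shape, label)
```

Because the encoder operates per channel, perturbing one channel of the input changes only that channel's compressed representation, which is what keeps the learned features attributable to individual electrodes.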