Decoding EEG recorded during motor imagery is pivotal for brain-computer interface (BCI) systems and largely determines their overall performance. As end-to-end, data-driven learning methods advance, the challenge lies in balancing model complexity against the need for human interpretability and trust. Despite strides in EEG-based BCIs, persistent issues such as artefacts and low signal-to-noise ratio underline the continued importance of model transparency. This work proposes using post-hoc explanations to interpret model outcomes and to validate them against domain knowledge. Applying the Grad-CAM post-hoc explanation technique to a motor imagery dataset, we show that relying solely on accuracy metrics may be inadequate to ensure BCI performance and acceptability. A model trained on all EEG channels of the dataset achieves 72.60% accuracy, while a model trained only on channels relevant to motor imagery and movement shows a statistically insignificant decrease of 1.75%. However, the features the two models rely on differ markedly when judged against neurophysiological knowledge. Integrating domain-specific knowledge with explainable AI (XAI) techniques thus emerges as a promising paradigm for validating the neurophysiological basis of model outcomes in BCIs. Our results underscore the importance of neurophysiological validation in evaluating BCI models and highlight the risks of relying exclusively on performance metrics when selecting models for dependable and transparent BCIs.
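For readers unfamiliar with Grad-CAM, the following minimal sketch illustrates the general post-hoc attribution idea in PyTorch, applied to a hypothetical compact EEG CNN. The architecture (`TinyEEGNet`), the choice of target layer, and the data shapes are illustrative assumptions and do not reflect the paper's actual model or dataset; the sketch only shows how channel-by-time relevance maps of the kind discussed here can be obtained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical compact CNN for EEG epochs shaped (batch, 1, channels, time).
# This is NOT the paper's architecture; it is a stand-in for illustration.
class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32))
        self.spatial = nn.Conv2d(8, 16, kernel_size=(n_channels, 1))
        self.pool = nn.AvgPool2d((1, 8))
        self.classifier = nn.LazyLinear(n_classes)

    def forward(self, x):
        x = F.elu(self.temporal(x))   # per-channel temporal filtering
        x = F.elu(self.spatial(x))    # mixing across electrodes
        x = self.pool(x)
        return self.classifier(x.flatten(1))

def grad_cam(model, x, target_layer, class_idx=None):
    """Return a Grad-CAM relevance map over the target layer's spatial dims."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["a"] = output.detach()

    def bwd_hook(_, grad_input, grad_output):
        gradients["g"] = grad_output[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1)
        else:
            class_idx = torch.as_tensor(class_idx).view(-1)
        score = logits.gather(1, class_idx.view(-1, 1)).sum()
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    # Global-average-pool the gradients to get per-filter weights,
    # combine weighted activations, and keep only positive evidence.
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["a"]).sum(dim=1))        # (batch, H, W)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)      # scale to [0, 1]
    return cam

# Usage with dummy data: one 22-channel, 1000-sample epoch.
# Targeting the temporal conv keeps the electrode dimension in the map,
# so relevance can be inspected per channel (e.g. over motor-cortex sites).
model = TinyEEGNet()
epoch = torch.randn(1, 1, 22, 1000)
heatmap = grad_cam(model, epoch, model.temporal)
print(heatmap.shape)  # (1, 22, 1001): relevance per electrode and time point
```

In this kind of setup, the resulting channel-by-time relevance map is what one would compare against neurophysiological expectations, for example whether relevance concentrates over sensorimotor electrodes rather than frontal or ocular sites.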