This study examines the effectiveness of traditional machine learning classifiers versus deep learning models for detecting imagined speech from electroencephalogram (EEG) data. Specifically, we evaluated conventional machine learning techniques, namely CSP-SVM and LDA-SVM classifiers, alongside deep learning architectures, namely EEGNet, ShallowConvNet, and DeepConvNet. The machine learning classifiers exhibited significantly lower precision and recall, indicating limited feature extraction capability and poor generalization between imagined speech and idle states. In contrast, the deep learning models, particularly EEGNet, achieved the highest accuracy of 0.7080 and an F1 score of 0.6718, demonstrating superior automatic feature extraction and representation learning, which are essential for capturing complex neurophysiological patterns. These findings highlight the limitations of conventional machine learning approaches in brain-computer interface (BCI) applications and advocate adopting deep learning methodologies for more precise and reliable detection of imagined speech. This foundational research contributes to the development of imagined speech-based BCI systems.
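The accuracy and F1 metrics reported above follow the standard definitions for binary classification (imagined speech vs. idle state). The following is a minimal sketch of how these metrics are derived from a confusion matrix; the labels used here are hypothetical illustrations, not data from the study.

```python
# Sketch: computing accuracy, precision, recall, and F1 for a binary
# imagined-speech detection task. Label 1 = imagined speech, 0 = idle.
# The example labels below are hypothetical, not the study's data.

def binary_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, f1) from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical predictions over ten EEG trials
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
print(f"accuracy={acc:.4f}  precision={prec:.4f}  recall={rec:.4f}  f1={f1:.4f}")
```

The F1 score is the harmonic mean of precision and recall, which is why the abstract's observation of low precision and recall for the CSP-SVM and LDA-SVM baselines translates directly into lower F1 than EEGNet's 0.6718.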