Accurate decoding of lower-limb motion from EEG signals is essential for advancing brain-computer interface (BCI) applications in movement intent recognition and control. This study presents NeuroDyGait, a two-stage, phase-aware EEG-to-gait decoding framework that explicitly models temporal continuity and cross-domain relationships. To address the challenges of causal, phase-consistent prediction and cross-subject variability, Stage I learns semantically aligned EEG-motion embeddings via relative contrastive learning with a cross-attention-based metric, while Stage II performs domain relation-aware decoding through dynamic fusion of session-specific heads. Comprehensive experiments on two benchmark datasets (GED and FMD) show substantial gains over baselines, including the recent (2025) EEG2GAIT model. The framework generalizes to unseen subjects and maintains inference latency below 5 ms per window, satisfying real-time BCI requirements. Visualization of the learned attention weights and phase-specific cortical saliency maps further reveals interpretable neural correlates of gait phases. Future extensions will target rehabilitation populations and multimodal integration.