Brain-computer interfaces (BCIs) provide a hands-free control modality for mobile robotics, yet decoding user intent during real-world navigation remains challenging. This work presents a brain-robot control framework for offline decoding of driving commands during robotic rover operation. A 4WD Rover Pro platform was remotely operated by 12 participants who navigated a predefined route using a joystick, executing the commands forward, reverse, left, right, and stop. Electroencephalogram (EEG) signals were recorded with a 16-channel OpenBCI cap and aligned with motor actions at Δ = 0 ms and at future prediction horizons (Δ > 0 ms). After preprocessing, several deep learning models were benchmarked, including convolutional neural networks, recurrent neural networks, and Transformer architectures. ShallowConvNet achieved the highest performance for both action prediction and intent prediction. By combining real-world robotic control with multi-horizon EEG intention decoding, this study introduces a reproducible benchmark and reveals key design insights for predictive deep learning-based BCI systems.
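The multi-horizon alignment described above can be sketched as an epoching step: for each joystick command onset, an EEG window is taken that ends Δ ms before the action, so that Δ = 0 corresponds to decoding the concurrent action and Δ > 0 to predicting it ahead of time. This is a minimal illustration under assumed parameters (window length, sampling rate, and the function name `extract_epochs` are not specified in the abstract):

```python
import numpy as np

def extract_epochs(eeg, event_samples, sfreq, delta_ms=0, win_ms=1000):
    """Slice fixed-length EEG windows aligned to command onsets.

    eeg           : array of shape (n_channels, n_samples)
    event_samples : command-onset positions, in samples
    sfreq         : sampling rate in Hz
    delta_ms      : prediction horizon; each epoch ends delta_ms
                    *before* the onset (0 = concurrent decoding)
    win_ms        : epoch length in milliseconds
    """
    win = int(win_ms * sfreq / 1000)
    shift = int(delta_ms * sfreq / 1000)
    epochs = []
    for onset in event_samples:
        end = onset - shift          # epoch ends delta_ms before the action
        start = end - win
        if start >= 0 and end <= eeg.shape[1]:
            epochs.append(eeg[:, start:end])
    return np.stack(epochs) if epochs else np.empty((0, eeg.shape[0], win))

# Example: 16-channel recording at 250 Hz with two command onsets
eeg = np.random.randn(16, 5000)
epochs = extract_epochs(eeg, [1500, 3000], sfreq=250, delta_ms=200)
# epochs.shape -> (2, 16, 250): two 1-s epochs, each ending 200 ms pre-onset
```

Sweeping `delta_ms` over a grid of horizons yields the datasets for the multi-horizon benchmark; out-of-bounds onsets near the recording edges are simply dropped.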