Mixed Reality (MR) interfaces increasingly rely on gaze for interaction, yet distinguishing visual attention from intentional action remains difficult, leading to the Midas Touch problem. Existing solutions require explicit confirmations, whereas brain-computer interfaces may provide an implicit marker of intention via the Stimulus-Preceding Negativity (SPN). We investigated how Intention (Select vs. Observe) and Feedback (With vs. Without) modulate the SPN during gaze-based MR interactions. We acquired EEG and eye-tracking data from 28 participants performing realistic selection tasks. The SPN was robustly elicited and sensitive to both factors: observation without feedback produced the strongest amplitudes, while the intention to select and the expectation of feedback reduced activity, suggesting the SPN reflects anticipatory uncertainty rather than motor preparation. Complementary decoding with deep learning models achieved reliable person-dependent classification of user intention, with accuracies ranging from 75% to 97% across participants. These findings establish the SPN as an implicit marker for building intention-aware MR interfaces that mitigate the Midas Touch problem.