Mixed Reality (MR) interfaces increasingly rely on gaze for interaction, yet distinguishing visual attention from intentional action remains difficult, leading to the Midas Touch problem. Existing solutions require explicit confirmations, whereas brain-computer interfaces may offer an implicit marker of intention in the form of the Stimulus-Preceding Negativity (SPN). We investigated how Intention (Select vs. Observe) and Feedback (With vs. Without) modulate the SPN during gaze-based MR interactions. We acquired EEG and eye-tracking data from 28 participants performing realistic selection tasks. The SPN was robustly elicited and sensitive to both factors: observation without feedback produced the strongest amplitudes, while the intention to select and the expectation of feedback reduced activity, suggesting that the SPN reflects anticipatory uncertainty rather than motor preparation. Complementary decoding with deep learning models achieved reliable person-dependent classification of user intention, with accuracies ranging from 75% to 97% across participants. These findings establish the SPN as an implicit marker for building intention-aware MR interfaces that mitigate the Midas Touch problem.