Extended Reality (XR) interfaces impose both ergonomic and cognitive demands, yet current systems often force a binary choice between hand-based input, which can produce fatigue, and gaze-based input, which is vulnerable to the Midas Touch problem and precision limitations. We introduce the xr-adaptive-modality-2025 platform, a web-based open-source framework for studying whether modality-specific adaptive interventions can improve XR-relevant pointing performance and reduce workload relative to static unimodal interaction. The platform combines physiologically informed gaze simulation, an ISO 9241-9 multidirectional tapping task, and two modality-specific adaptive interventions: gaze declutter and hand target-width inflation. We evaluated the system in a 2 × 2 × 2 within-subjects design manipulating Modality (Hand vs. Gaze), UI Mode (Static vs. Adaptive), and Pressure (Yes vs. No). Results from N = 69 participants show that hand input yielded higher throughput than gaze input (5.17 vs. 4.73 bits/s), a lower error rate (1.8% vs. 19.1%), and lower NASA-TLX workload. Crucially, error profiles differed sharply by modality: gaze errors were predominantly slips (99.2%), whereas hand errors were predominantly misses (95.7%), consistent with the Midas Touch account. Of the two adaptive interventions, only gaze declutter was actually triggered in this dataset; it modestly reduced timeouts but not slips. Hand width inflation was not evaluable due to a UI integration bug. These findings reveal modality-specific failure modes with direct implications for adaptive policy design, and establish the platform as a reproducible infrastructure for future studies.
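The throughput figures above (5.17 vs. 4.73 bits/s) follow the ISO 9241-9 effective-throughput convention for multidirectional tapping tasks. As a minimal sketch of how such a figure is typically derived (this illustrates the standard formula, not the platform's actual analysis code; all variable names are hypothetical):

```python
import math
import statistics

def iso9241_throughput(amplitudes, endpoint_errors, movement_times_s):
    """Effective throughput (bits/s) per the ISO 9241-9 convention.

    amplitudes: per-trial movement amplitudes (e.g., pixels)
    endpoint_errors: per-trial signed endpoint deviations along the task axis
    movement_times_s: per-trial movement times in seconds
    """
    # Effective width from endpoint scatter: W_e = 4.133 * SD of deviations
    w_e = 4.133 * statistics.stdev(endpoint_errors)
    # Effective amplitude: mean of observed movement amplitudes
    a_e = statistics.mean(amplitudes)
    # Effective index of difficulty (Shannon formulation), in bits
    id_e = math.log2(a_e / w_e + 1)
    # Throughput = effective ID divided by mean movement time
    return id_e / statistics.mean(movement_times_s)

# Illustrative trial data (hypothetical values, not from the study)
tp = iso9241_throughput(
    amplitudes=[256, 256, 256, 256, 256],
    endpoint_errors=[-10, -5, 0, 5, 10],
    movement_times_s=[0.6, 0.6, 0.6, 0.6, 0.6],
)
```

The 4.133 multiplier maps the standard deviation of endpoint scatter to a width capturing ~96% of hits, so throughput reflects the precision participants actually achieved rather than the nominal target size.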