Human-in-the-loop optimization identifies optimal interface designs by iteratively observing user performance. However, it often requires numerous iterations due to the lack of prior information. While recent approaches have accelerated this process by leveraging previous optimization data, collecting user data remains costly and often impractical. We present a conceptual framework, Human-in-the-Loop Optimization with Model-Informed Priors (HOMI), which augments human-in-the-loop optimization with a training phase where the optimizer learns adaptation strategies from diverse, synthetic user data generated with predictive models before deployment. To realize HOMI, we introduce Neural Acquisition Function+ (NAF+), a Bayesian optimization method featuring a neural acquisition function trained with reinforcement learning. NAF+ learns optimization strategies from large-scale synthetic data, improving efficiency in real-time optimization with users. We evaluate HOMI and NAF+ with mid-air keyboard optimization, a representative VR input task. Our work presents a new approach for more efficient interface adaptation by bridging in situ and in silico optimization processes.
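The core idea of a neural acquisition function can be illustrated with a minimal sketch. The paper does not specify NAF+'s architecture or training details; the network shape, weights, and candidate setup below are hypothetical placeholders, and the weights are random rather than RL-trained as in NAF+. The sketch only shows the interface: the function maps each candidate's posterior statistics (mean, std) to a scalar acquisition score, and the optimizer queries the user at the highest-scoring design.

```python
import numpy as np

rng = np.random.default_rng(0)

def naf_score(mean, std, W1, b1, W2, b2):
    """Neural acquisition function: maps each candidate's posterior
    mean and std (e.g. from a Gaussian process surrogate) to a scalar
    acquisition score via a small MLP. In NAF+ these weights would be
    learned with reinforcement learning on synthetic user data; here
    they are random placeholders for illustration."""
    x = np.stack([mean, std], axis=-1)   # (n_candidates, 2)
    h = np.tanh(x @ W1 + b1)             # (n_candidates, hidden)
    return (h @ W2 + b2).squeeze(-1)     # (n_candidates,)

# Hypothetical example: 5 candidate keyboard designs with posterior stats.
mean = rng.normal(size=5)
std = rng.uniform(0.1, 1.0, size=5)

# Randomly initialized weights stand in for an RL-trained policy.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

scores = naf_score(mean, std, W1, b1, W2, b2)
best = int(np.argmax(scores))  # design to evaluate with the real user next
```

A conventional acquisition function (e.g. expected improvement) is a fixed formula of the same posterior statistics; replacing it with a learned network is what lets the optimizer absorb adaptation strategies from large-scale synthetic data before deployment.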