Bayesian optimization (BO) is a widely used approach to hyperparameter optimization (HPO). However, most existing HPO methods incorporate expert knowledge only during initialization, leaving practitioners unable to influence the optimization process as new insights emerge and limiting the applicability of BO in iterative machine learning development workflows. We propose DynaBO, a BO framework that enables continuous user control over the optimization process. DynaBO leverages user-provided priors by augmenting the acquisition function with decaying, prior-weighted preferences over time, while preserving asymptotic convergence guarantees. To reinforce robustness, we introduce a data-driven safeguard that detects, and can be used to reject, misleading priors. We prove theoretical results on near-certain convergence, robustness to adversarial priors, and accelerated convergence when informative priors are provided. Extensive experiments on diverse HPO benchmarks show that DynaBO consistently outperforms state-of-the-art competitors across all benchmarks and all kinds of priors. Our results demonstrate that DynaBO enables reliable and efficient collaborative BO, bridging automated and manually controlled model development.
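The abstract does not give the exact form of the prior-weighted acquisition function, but the mechanism it describes can be illustrated with a minimal sketch. The multiplicative weighting with a decay exponent `beta / n` below is an assumption in the spirit of prior-weighted BO methods, not DynaBO's actual formulation; the function names and the `beta` parameter are hypothetical.

```python
def prior_weighted_acquisition(acq, prior, n_evals, beta=10.0):
    """Wrap an acquisition function `acq` so that a user prior `prior`
    biases it, with the prior's influence decaying as evaluations
    accumulate (hypothetical decay schedule: prior(x) ** (beta / n)).
    With few evaluations the prior dominates; as n_evals grows, the
    exponent shrinks toward zero and the wrapped function approaches
    the plain acquisition, preserving asymptotic behavior."""
    def weighted(x):
        decay = beta / max(n_evals, 1)  # influence shrinks over time
        return acq(x) * prior(x) ** decay
    return weighted
```

Under this sketch, early iterations strongly prefer regions the user marks as promising, while late iterations fall back to the data-driven acquisition alone, which is one simple way to reconcile user preferences with convergence guarantees.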