Bayesian optimization (BO) is a popular technique for sample-efficient optimization of black-box functions. In many applications, the parameters being tuned come with a carefully engineered default configuration, and practitioners only want to deviate from this default when necessary. Standard BO, however, does not aim to minimize deviation from the default and, in practice, often pushes weakly relevant parameters to the boundary of the search space. This makes it difficult to distinguish between important and spurious changes and increases the burden of vetting recommendations when the optimization objective omits relevant operational considerations. We introduce BONSAI, a default-aware BO policy that prunes low-impact deviations from a default configuration while explicitly controlling the loss in acquisition value. BONSAI is compatible with a variety of acquisition functions, including expected improvement and upper confidence bound (GP-UCB). We theoretically bound the regret incurred by BONSAI, showing that, under certain conditions, it enjoys the same no-regret property as vanilla GP-UCB. Across many real-world applications, we empirically find that BONSAI substantially reduces the number of non-default parameters in recommended configurations while maintaining competitive optimization performance, with little effect on wall time.