Bayesian optimization is critically vulnerable to extreme outliers. Existing provably robust methods typically assume a bounded cumulative corruption budget, which leaves them defenseless against even a single corruption of sufficient magnitude. To address this, we introduce a new adversary whose budget is bounded only in the frequency of corruptions, not in their magnitude. We then derive RCGP-UCB, an algorithm coupling the well-known upper confidence bound (UCB) approach with a Robust Conjugate Gaussian Process (RCGP). We present stable and adaptive versions of RCGP-UCB, and prove that they achieve sublinear regret in the presence of up to $O(T^{1/4})$ and $O(T^{1/7})$ corruptions of possibly unbounded magnitude. This robustness comes at near-zero cost: without outliers, RCGP-UCB's regret bounds match those of the standard GP-UCB algorithm.
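To make the UCB component concrete, the following is a minimal sketch of the standard GP-UCB loop that RCGP-UCB builds on, using a plain (non-robust) GP surrogate; the RBF kernel, toy objective, grid domain, and exploration weight are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel between row sets A and B (assumed choice).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Standard GP posterior mean/std at candidate points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - (v * v).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

# GP-UCB loop on a 1-D grid; beta is the exploration weight.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x).ravel()          # toy objective (assumption)
grid = np.linspace(0, 2, 200)[:, None]
X = grid[rng.choice(len(grid), 3)]           # 3 random initial points
y = f(X)
for t in range(20):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]  # UCB acquisition, beta = 2
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[None, :]))
```

RCGP-UCB would replace `gp_posterior` with the robust conjugate GP posterior, whose weighting down-weights observations that look like outliers, so an adversarial corruption of arbitrary magnitude has bounded influence on the surrogate.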