We address the challenge of sequential data-driven decision-making under context distributional uncertainty. This problem arises in numerous real-world scenarios where the learner optimizes black-box objective functions in the presence of uncontrollable contextual variables. We consider the setting where the context distribution is uncertain but known to lie within an ambiguity set defined as a ball in the Wasserstein distance. We propose a novel algorithm for Wasserstein Distributionally Robust Bayesian Optimization that can handle continuous context distributions while maintaining computational tractability. Our theoretical analysis combines recent results in self-normalized concentration in Hilbert spaces and finite-sample bounds for distributionally robust optimization to establish sublinear regret bounds that match state-of-the-art results. Through extensive comparisons with existing approaches on both synthetic and real-world problems, we demonstrate the simplicity, effectiveness, and practical applicability of our proposed method.
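To make the ambiguity-set construction concrete: for 1-D empirical distributions with equally weighted samples of the same size, the 1-Wasserstein distance reduces to the mean absolute difference of the sorted samples, and the ambiguity set is simply the set of distributions within a chosen radius of a reference. The snippet below is a minimal illustration of this idea only; the function name, the sample data, and the radius `eps` are hypothetical and not part of the proposed method.

```python
# Hypothetical sketch: 1-Wasserstein distance between two 1-D empirical
# distributions with equal sample counts and uniform weights. In this
# special case it equals the mean absolute difference of sorted samples.
def wasserstein_1d(xs, ys):
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Reference context distribution (empirical) and a perturbed one.
ref = [0.0, 1.0, 2.0]
shifted = [0.5, 1.5, 2.5]

dist = wasserstein_1d(ref, shifted)
print(dist)  # 0.5

# Membership in a Wasserstein ambiguity ball of radius eps around ref.
eps = 1.0
print(dist <= eps)  # True: `shifted` lies inside the ambiguity set
```

The distributionally robust objective then optimizes against the worst-case distribution inside such a ball, rather than against the reference distribution alone.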