Online learning algorithms often face a fundamental trilemma: balancing regret guarantees between adversarial and stochastic settings while providing baseline safety against a fixed comparator. While existing methods excel in one or two of these regimes, they typically fail to unify all three without sacrificing optimal rates or requiring oracle access to problem-dependent parameters. In this work, we bridge this gap by introducing COMPASS-Hedge. Our algorithm is the first full-information method to simultaneously achieve: i) minimax-optimal regret in adversarial environments; ii) instance-optimal, gap-dependent regret in stochastic environments; and iii) $\tilde{\mathcal{O}}(1)$ regret (i.e., constant up to logarithmic factors) relative to a designated baseline policy. Crucially, COMPASS-Hedge is parameter-free: it requires no prior knowledge of the environment's nature or of the magnitudes of the stochastic suboptimality gaps. Our approach hinges on a novel integration of adaptive pseudo-regret scaling and phase-based aggression, coupled with a comparator-aware mixing strategy. To the best of our knowledge, this provides the first "best-of-three-worlds" guarantee in the full-information setting, establishing that baseline safety need not come at the cost of worst-case robustness or stochastic efficiency.
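The full COMPASS-Hedge algorithm is not specified in this abstract, but the idea behind comparator-aware mixing can be illustrated with a minimal sketch: standard Hedge (exponential weights) with a fixed fraction of probability mass pinned on a designated baseline expert, a common device for capping regret against that baseline. All function and parameter names below are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def hedge_with_baseline_mix(losses, eta=0.5, mix=0.1, baseline=0):
    """Illustrative sketch (NOT the paper's COMPASS-Hedge): exponential
    weights over experts, with a fraction `mix` of probability mass
    placed on a designated baseline expert each round.

    losses: list of rounds, each a list of per-expert losses in [0, 1]
            (full information: every expert's loss is observed).
    Returns the learner's cumulative expected loss.
    """
    n = len(losses[0])            # number of experts
    cum = [0.0] * n               # cumulative loss of each expert
    total_loss = 0.0
    for loss in losses:
        # Exponential-weights distribution over experts.
        w = [math.exp(-eta * c) for c in cum]
        z = sum(w)
        p = [(1.0 - mix) * wi / z for wi in w]
        # Comparator-aware mixing: reserve `mix` mass for the baseline.
        p[baseline] += mix
        total_loss += sum(pi * li for pi, li in zip(p, loss))
        cum = [c + l for c, l in zip(cum, loss)]
    return total_loss
```

Because the baseline expert always receives at least `mix` probability, the learner's per-round loss can exceed the baseline's by at most a bounded amount; in the actual paper this mixing would presumably be adaptive rather than a fixed constant.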