In this paper, we study the behavior of the Upper Confidence Bound-Variance (UCB-V) algorithm for Multi-Armed Bandit (MAB) problems, a variant of the canonical Upper Confidence Bound (UCB) algorithm that incorporates variance estimates into its decision-making process. More precisely, we provide an asymptotic characterization of the arm-pulling rates for UCB-V, extending recent results for the canonical UCB in Kalvit and Zeevi (2021) and Khamaru and Zhang (2024). In an interesting contrast to the canonical UCB, our analysis reveals that the behavior of UCB-V can exhibit instability, meaning that the arm-pulling rates may not always be asymptotically deterministic. Beyond the asymptotic characterization, we also provide non-asymptotic bounds for the arm-pulling rates in the high-probability regime, offering insights into the regret analysis. As an application of this high-probability result, we establish that UCB-V can achieve a more refined regret bound, previously unknown even for more sophisticated variance-aware online decision-making algorithms.
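To make the variance-aware index concrete, the following is a minimal sketch of a UCB-V-style policy on Bernoulli arms. The index form (empirical mean plus a variance-dependent bonus and a range-dependent correction) follows the standard UCB-V construction of Audibert, Munos, and Szepesvári; the specific constants `zeta = 1.2`, `c = 1.0`, and the reward range bound `b = 1.0` are illustrative assumptions, not the paper's tuning.

```python
import math
import random

def ucbv_index(mean, var, pulls, t, zeta=1.2, c=1.0, b=1.0):
    """UCB-V index: empirical mean + variance-aware bonus + range correction.

    The bonus sqrt(2 * var * e / pulls) shrinks for low-variance arms,
    which is the key difference from the canonical UCB index.
    """
    e = zeta * math.log(t)
    return mean + math.sqrt(2.0 * var * e / pulls) + 3.0 * c * b * e / pulls

def ucbv(arm_means, horizon, seed=0):
    """Run UCB-V on Bernoulli arms; return the pull count of each arm."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    sumsq = [0.0] * k
    # Initialization: pull each arm once.
    for i in range(k):
        x = float(rng.random() < arm_means[i])
        counts[i] += 1
        sums[i] += x
        sumsq[i] += x * x
    # Main loop: pull the arm with the largest UCB-V index.
    for t in range(k + 1, horizon + 1):
        best, best_idx = 0, -float("inf")
        for i in range(k):
            m = sums[i] / counts[i]
            v = max(sumsq[i] / counts[i] - m * m, 0.0)  # empirical variance
            idx = ucbv_index(m, v, counts[i], t)
            if idx > best_idx:
                best, best_idx = i, idx
        x = float(rng.random() < arm_means[best])
        counts[best] += 1
        sums[best] += x
        sumsq[best] += x * x
    return counts
```

On a well-separated instance the optimal arm accumulates the overwhelming majority of pulls; the arm-pulling rates studied in the paper describe exactly how the remaining pulls distribute across suboptimal arms as the horizon grows.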