When multi-armed bandit (MAB) algorithms allocate pulls among competing arms, the resulting allocation can exhibit large run-to-run variation. This is particularly harmful in modern applications such as learning-enhanced platform operations and post-bandit statistical inference. Thus motivated, we introduce a new performance metric for MAB algorithms, termed allocation variability, defined as the largest (over arms) standard deviation of an arm's number of pulls. We establish a fundamental trade-off between allocation variability and regret, the canonical performance metric for reward maximization. In particular, for any algorithm, the worst-case regret $R_T$ and worst-case allocation variability $S_T$ must satisfy $R_T \cdot S_T = \Omega(T^{3/2})$ as $T \rightarrow \infty$, as long as $R_T = o(T)$. This implies that any minimax regret-optimal algorithm must incur worst-case allocation variability $\Theta(T)$, the largest possible scale, while any algorithm with sublinear worst-case regret must incur $S_T = \omega(\sqrt{T})$. We further show that this lower bound is essentially tight: any point on the Pareto frontier $R_T \cdot S_T = \widetilde{\Theta}(T^{3/2})$ can be achieved by a simple tunable algorithm, UCB-f, a generalization of the classic UCB1. Finally, we discuss implications for platform operations and for statistical inference when bandit algorithms are used. As a byproduct of our results, we resolve an open question of Praharaj and Khamaru (2025).
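To make the two quantities concrete, the sketch below simulates a UCB-f-style index policy and estimates the allocation variability $S_T$ by Monte Carlo. The abstract does not specify the exact form of UCB-f, so the index here is a hypothetical reading of the name: UCB1's $\log t$ exploration term is replaced by a tunable function $f(t)$, with $f = \log$ recovering a UCB1-like policy. The function names and parameters are illustrative, not the paper's definitions; only the metric (largest per-arm standard deviation of pull counts) follows the abstract.

```python
# Sketch of a UCB-f-style index policy (hypothetical form: the paper's exact
# definition of UCB-f is not reproduced here). We assume UCB-f replaces the
# log(t) term in UCB1's exploration bonus with a tunable function f(t), so f
# trades off regret against allocation variability.
import numpy as np

def ucb_f(means, T, f=np.log, seed=None):
    """Run a UCB-f-style policy for T rounds on Bernoulli arms with the given
    true means; f is the tunable exploration function. Returns per-arm pull
    counts after T rounds."""
    rng = np.random.default_rng(seed)
    K = len(means)
    pulls = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    for t in range(1, T + 1):
        if t <= K:                      # pull each arm once to initialize
            arm = t - 1
        else:                           # index = empirical mean + bonus
            bonus = np.sqrt(2.0 * f(t) / pulls)
            arm = int(np.argmax(sums / pulls + bonus))
        reward = rng.random() < means[arm]
        pulls[arm] += 1
        sums[arm] += reward
    return pulls

# Allocation variability S_T is the largest (over arms) standard deviation of
# an arm's number of pulls; here it is estimated by Monte Carlo over runs.
def allocation_variability(means, T, f=np.log, runs=200, seed=0):
    counts = np.array([ucb_f(means, T, f, seed=seed + r) for r in range(runs)])
    return counts.std(axis=0).max()

if __name__ == "__main__":
    # Two identical arms: the hardest case for allocation stability, since
    # small reward fluctuations can tip the allocation either way.
    print(allocation_variability([0.5, 0.5], T=2000))
```

Varying the growth rate of $f$ in this sketch is one way to trace an empirical regret/variability curve and compare it against the $R_T \cdot S_T = \widetilde{\Theta}(T^{3/2})$ frontier, under the stated assumption about the form of the index.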