We study an online resource allocation problem in which, at each round, a budget $B$ must be allocated across $K$ arms under censored feedback. An arm yields a reward if and only if two conditions hold: (i) the arm is activated according to an arm-specific Bernoulli random variable with unknown parameter, and (ii) the allocated budget exceeds a random threshold drawn from a parametric distribution with unknown parameter. Over $T$ rounds, the learner must jointly estimate the unknown parameters and allocate the budget so as to maximize cumulative reward while facing the exploration--exploitation trade-off. We prove an information-theoretic regret lower bound of $\Omega(T^{1/3})$, demonstrating the intrinsic difficulty of the problem. We then propose RA-UCB, an optimistic algorithm built on non-trivial parameter estimation and confidence bounds. When the budget $B$ is known at the beginning of each round, RA-UCB achieves regret of order $\widetilde{\mathcal{O}}(\sqrt{T})$, and even $\mathcal{O}(\mathrm{poly}\text{-}\log T)$ under stronger assumptions. For an unknown, round-dependent budget, we introduce MG-UCB, which allows within-round switching and infinitesimally small allocations, and matches the regret guarantees of RA-UCB. Finally, we validate our theoretical results through experiments on real-world datasets.
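As a minimal illustration of the feedback model described above, the sketch below simulates one round: each arm yields a unit reward only if its Bernoulli activation fires and its allocated budget exceeds a random threshold. The exponential threshold distribution, the function name `round_reward`, and all parameter values are illustrative assumptions, not part of the paper's specification.

```python
import random


def round_reward(allocations, activation_probs, threshold_rates, rng=None):
    """Simulate one round of the censored-feedback model (illustrative sketch).

    Arm i yields a unit reward iff (i) its Bernoulli activation with unknown
    parameter activation_probs[i] fires, AND (ii) the allocated budget
    allocations[i] exceeds a random threshold. Here the threshold is drawn
    from an exponential distribution with rate threshold_rates[i] -- an
    assumed parametric family chosen only for illustration.
    """
    rng = rng or random.Random(0)
    rewards = []
    for b, p, lam in zip(allocations, activation_probs, threshold_rates):
        activated = rng.random() < p          # condition (i): Bernoulli activation
        threshold = rng.expovariate(lam)      # condition (ii): random threshold
        rewards.append(1 if (activated and b > threshold) else 0)
    return rewards


# Example: a budget B = 1.0 split across K = 3 arms (hypothetical values).
rewards = round_reward([0.5, 0.3, 0.2], [0.9, 0.6, 0.8], [2.0, 1.0, 5.0])
print(rewards)
```

The learner observes only these censored 0/1 outcomes, which is why the activation parameters and threshold parameters must be estimated jointly rather than separately.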