We study the Pandora's Box problem in an online learning setting with semi-bandit feedback. In each round, the learner sequentially pays to open up to $n$ boxes with unknown reward distributions, observes the reward of each opened box, and decides when to stop. The learner's utility is the maximum observed reward minus the cumulative cost of the opened boxes, and the goal is to minimize regret, defined as the gap between the cumulative expected utility of the optimal policy and that of the learner. We propose a new algorithm that achieves $\widetilde{O}(\sqrt{nT})$ regret after $T$ rounds, which improves the $\widetilde{O}(n\sqrt{T})$ bound of Agarwal et al. [2024] and matches the known lower bound up to logarithmic factors. To better capture real-life applications, we then extend our results to a natural but challenging contextual linear setting, where each box's expected reward is linear in a known but time-varying $d$-dimensional context and the noise distribution is fixed over time. We design an algorithm that learns both the linear function and the noise distributions, achieving $\widetilde{O}(nd\sqrt{T})$ regret. Finally, we show that our techniques also apply to the online Prophet Inequality problem, where the learner must decide immediately whether or not to accept a revealed reward. In both the non-contextual and contextual settings, our approach achieves similar improvements and regret bounds.
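For concreteness, the following is a minimal formalization of the per-round utility and the regret described above, written with illustrative notation that does not appear in the abstract (round index $t$, opened set $S_t$, observed rewards $v_{t,i}$, opening costs $c_i$):
\[
  u_t \;=\; \max_{i \in S_t} v_{t,i} \;-\; \sum_{i \in S_t} c_i,
  \qquad
  \mathrm{Reg}(T) \;=\; \sum_{t=1}^{T} \Bigl( \mathbb{E}\bigl[u_t^{\ast}\bigr] - \mathbb{E}\bigl[u_t\bigr] \Bigr),
\]
where $u_t^{\ast}$ denotes the utility of the optimal policy that knows the reward distributions.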