While significant progress has been made in designing algorithms that minimize regret in online decision-making, real-world scenarios often introduce additional complexities, perhaps the most challenging of which is missing outcomes. Ignoring missingness, or simply assuming outcomes are missing at random, biases the reward estimates and can result in linear regret. Despite the practical relevance of this challenge, no rigorous methodology currently exists for systematically handling missingness, especially when the missingness mechanism is not random. In this paper, we address this gap in the context of multi-armed bandits (MAB) with missing outcomes by analyzing the impact of different missingness mechanisms on achievable regret bounds. We introduce algorithms that account for missingness under both missing at random (MAR) and missing not at random (MNAR) models. Through both analytical and simulation studies, we demonstrate the drastic improvements in decision-making obtained by accounting for missingness in these settings.
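To illustrate the core issue the abstract raises, the following sketch simulates a single bandit arm whose outcomes are missing not at random: rewards of 1 are observed more often than rewards of 0, so the naive mean over observed outcomes is biased upward. When the observation propensities are known, a standard inverse-propensity-weighted (IPW) estimate removes the bias. The propensity values here are hypothetical, and this is a generic illustration of the bias mechanism, not the algorithm proposed in the paper.

```python
import random

random.seed(0)

# Single arm with true mean reward 0.5. MNAR missingness:
# a reward of 1 is observed with prob 0.9, a reward of 0 with
# prob 0.3 (hypothetical propensities for illustration only).
p_obs = {1: 0.9, 0: 0.3}
n = 200_000

obs_sum = obs_cnt = 0.0
ipw_sum = 0.0
for _ in range(n):
    r = 1 if random.random() < 0.5 else 0    # true reward draw
    if random.random() < p_obs[r]:           # outcome observed?
        obs_sum += r
        obs_cnt += 1
        ipw_sum += r / p_obs[r]              # reweight by 1/propensity

naive = obs_sum / obs_cnt   # biased: ~0.9*0.5 / (0.9*0.5 + 0.3*0.5) = 0.75
ipw = ipw_sum / n           # unbiased: ~0.5 (true mean)
print(f"naive mean: {naive:.3f}   IPW mean: {ipw:.3f}")
```

An algorithm that feeds the naive estimate into its arm-selection rule would systematically overvalue arms with outcome-dependent missingness, which is how linear regret can arise; the IPW-style correction sketched above is one standard way to restore unbiasedness when propensities are known or estimable.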