This paper presents a new algorithm for neural contextual bandits (CBs) that addresses the challenge of delayed reward feedback, where the reward for a chosen action is revealed only after a random, unknown delay. This scenario is common in applications such as online recommendation systems and clinical trials, where feedback is delayed because the outcomes of chosen actions, such as a user's response to a recommendation or a patient's response to a treatment, take time to manifest and be measured. The proposed algorithm, called Delayed NeuralUCB, uses an upper confidence bound (UCB)-based exploration strategy. Under the assumption of independent and identically distributed sub-exponential reward delays, we derive an upper bound on the cumulative regret over a horizon of length T. We further consider a variant of the algorithm, called Delayed NeuralTS, that uses Thompson Sampling-based exploration. Numerical experiments on real-world datasets such as MNIST and Mushroom, along with comparisons to benchmark approaches, demonstrate that the proposed algorithms effectively handle varying delays and are well suited to complex real-world scenarios.
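To make the setting concrete, below is a minimal Python sketch of a delayed-feedback contextual bandit loop with a NeuralUCB-style score (network prediction plus a gradient-feature confidence bonus). It is an illustration under stated assumptions, not the paper's Delayed NeuralUCB: the tiny network, the synthetic linear reward model, the geometric delay distribution (a simple stand-in for sub-exponential delays), and the choice to update the model only when delayed rewards arrive are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
D, K, T = 8, 4, 200            # context dimension, number of arms, horizon length
GAMMA, LAMBDA = 1.0, 1.0       # exploration weight, ridge regularizer

# Small reward network f(x; theta); a stand-in for the paper's neural model.
net = nn.Sequential(nn.Linear(D, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
P = sum(p.numel() for p in net.parameters())
Z = LAMBDA * torch.eye(P)      # design matrix over gradient features

def grad_features(x):
    """Flattened gradient of the scalar network output w.r.t. all parameters."""
    net.zero_grad()
    net(x).backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

theta_star = torch.randn(D)    # hidden linear reward model (toy environment)
pending = []                   # (arrival_round, context, reward) awaiting feedback
history = []                   # (context, reward) pairs already observed

for t in range(T):
    contexts = torch.randn(K, D)
    # NeuralUCB-style score: prediction plus a bonus from the gradient
    # feature norm under Z^{-1}, encouraging exploration of uncertain arms.
    scores = []
    for a in range(K):
        g = grad_features(contexts[a])
        bonus = GAMMA * torch.sqrt(g @ torch.linalg.solve(Z, g))
        scores.append(net(contexts[a]).item() + bonus.item())
    a = max(range(K), key=lambda i: scores[i])

    # The reward is generated now but revealed only after a random delay;
    # the geometric delay here is an illustrative sub-exponential example.
    reward = contexts[a] @ theta_star + 0.1 * torch.randn(())
    delay = int(torch.distributions.Geometric(0.2).sample().item())
    pending.append((t + delay, contexts[a], reward))

    # Use only rewards whose delay has elapsed to update Z and the network.
    arrived = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, x, r in arrived:
        g = grad_features(x)
        Z = Z + torch.outer(g, g)
        history.append((x, r))
    for x, r in history[-32:]:  # a few SGD steps on recently observed data
        opt.zero_grad()
        loss = (net(x).squeeze() - r) ** 2
        loss.backward()
        opt.step()
```

The key structural point is the `pending` buffer: at round t the learner may train only on rewards whose sampled delay has already elapsed, which is what distinguishes this loop from a standard contextual bandit. A Thompson Sampling variant in the spirit of Delayed NeuralTS would replace the UCB score with a draw from a Gaussian centered at the prediction with variance given by the same gradient-feature term.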