Many works have developed no-regret algorithms for contextual bandits with function approximation, where the mean reward function over context-action pairs belongs to a function class. Although there are many approaches to this problem, one that has gained prominence is the use of algorithms based on the optimism principle, such as optimistic least squares. It can be shown that the regret of this algorithm scales as the square root of the product of the eluder dimension (a statistical measure of the complexity of the function class), the logarithm of the function class size, and the time horizon. Unfortunately, even when the variance of the measurement noise of the rewards changes over time and is very small, the regret of the optimistic least squares algorithm still scales with the square root of the time horizon. In this work we are the first to develop algorithms whose regret bounds scale not with the square root of the time horizon, but with the square root of the sum of the measurement variances, in the setting of contextual bandits with function approximation when the variances are unknown. These bounds generalize existing techniques for deriving second-order bounds in contextual linear problems.
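The two scalings described above can be written schematically as follows; the notation here is an assumption for illustration ($d_E$ for the eluder dimension, $\mathcal{F}$ for the function class, $T$ for the time horizon, and $\sigma_t^2$ for the per-round measurement-noise variance), not the paper's exact statement:

```latex
% Worst-case bound of optimistic least squares (schematic):
\mathrm{Reg}(T) \;\lesssim\; \sqrt{\, d_E \cdot \log|\mathcal{F}| \cdot T \,}

% Variance-adaptive (second-order) bound of the form developed here (schematic):
\mathrm{Reg}(T) \;\lesssim\; \sqrt{\, d_E \cdot \log|\mathcal{F}| \cdot \textstyle\sum_{t=1}^{T} \sigma_t^2 \,}
```

When the noise variances $\sigma_t^2$ are small, $\sum_{t=1}^{T} \sigma_t^2 \ll T$, so the second bound can be much tighter than the worst-case $\sqrt{T}$ scaling.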