Many works have developed no-regret algorithms for contextual bandits with function approximation, where the mean reward function over context-action pairs belongs to a known function class. Among the many approaches to this problem, algorithms based on the optimism principle, such as optimistic least squares, have gained prominence. It can be shown that the regret of this algorithm scales as the square root of the product of the eluder dimension (a statistical measure of the complexity of the function class), the logarithm of the function class size, and the time horizon. Unfortunately, even when the variance of the reward measurement noise changes over time and is very small, the regret of optimistic least squares still scales with the square root of the time horizon. In this work we are the first to develop algorithms whose regret scales not with the square root of the time horizon but with the square root of the sum of the measurement variances, in the setting of contextual bandits with function approximation when the variances are unknown. These bounds generalize existing techniques for deriving second-order bounds in contextual linear problems.
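To make the comparison concrete, the two regret scalings can be sketched as follows. This is an informal summary under assumed notation (none of these symbols appear in the abstract itself): $\mathcal{F}$ is the function class, $d_E$ its eluder dimension, $T$ the time horizon, and $\sigma_t^2$ the variance of the measurement noise at round $t$.

```latex
% Worst-case (first-order) bound for optimistic least squares:
\mathrm{Reg}(T) \;=\; \widetilde{O}\!\left( \sqrt{\, d_E \,\log|\mathcal{F}|\; T \,} \right)

% Second-order bound of the kind developed in this work: the horizon T
% is replaced by the cumulative measurement variance, so when the noise
% is small the regret can be far below the \sqrt{T} worst-case rate.
\mathrm{Reg}(T) \;=\; \widetilde{O}\!\left( \sqrt{\, d_E \,\log|\mathcal{F}|\; \textstyle\sum_{t=1}^{T} \sigma_t^2 \,} \right)
```

Since $\sum_{t=1}^{T} \sigma_t^2 \le T$ whenever $\sigma_t^2 \le 1$, the second bound is never worse than the first, and it is much smaller when the per-round variances are small.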