We study the stochastic bandit problem with ReLU neural network structure. We show that a $\tilde{O}(\sqrt{T})$ regret guarantee is achievable for bandits with one-layer ReLU neural networks; to the best of our knowledge, ours is the first work to achieve such a guarantee. In this setting, we propose an OFU-ReLU algorithm that attains this upper bound. The algorithm first explores randomly until it reaches a linear regime, and then runs a UCB-type linear bandit algorithm to balance exploration and exploitation. Our key insight is that, once the ReLU parameters are learned sufficiently accurately during the exploration stage, the piecewise linear structure of ReLU activations lets us convert the problem into a linear bandit in a transformed feature space. To remove the dependence on model parameters, we further design an OFU-ReLU+ algorithm based on a batching strategy, which provides the same theoretical guarantee.
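To make the two-stage idea concrete, the following is a minimal sketch, not the paper's exact OFU-ReLU procedure: it explores uniformly for a fixed number of rounds, fits a one-hidden-layer ReLU reward model by a crude gradient routine, then freezes the estimated activation pattern and runs a LinUCB-style rule on the masked (piecewise-linear) features. The class name `OFUReLUSketch`, the hyperparameters `T1`, `lam`, `beta`, and the fitting routine are all illustrative assumptions, not quantities specified in the paper.

```python
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


class OFUReLUSketch:
    """Illustrative two-stage bandit sketch (assumed reward model:
    r = sum_j a_j * relu(w_j . x) + noise).
    Stage 1: uniform random exploration + nonconvex regression fit of (W, a).
    Stage 2: treat the ReLU-masked features as a linear bandit and act optimistically."""

    def __init__(self, d, m, T1, lam=1.0, beta=1.0):
        self.d, self.m = d, m          # input dimension, number of hidden units
        self.T1 = T1                   # length of the random-exploration stage
        self.lam, self.beta = lam, beta
        self.t = 0
        self.X, self.r = [], []        # data collected during exploration
        self.W = None                  # estimated hidden-layer weights (m x d)
        self.V = lam * np.eye(d * m)   # regularized design matrix for the linear stage
        self.b = np.zeros(d * m)

    def _phi(self, x):
        # Transformed feature: x masked by the estimated activation pattern,
        # one block per hidden unit (the local linear regime of each ReLU).
        mask = (self.W @ x > 0).astype(float)          # (m,)
        return (mask[:, None] * x[None, :]).ravel()    # (m*d,)

    def select(self, arms):
        self.t += 1
        if self.t <= self.T1 or self.W is None:
            return np.random.randint(len(arms))        # Stage 1: explore uniformly
        Vinv = np.linalg.inv(self.V)
        theta = Vinv @ self.b
        ucb = [phi @ theta + self.beta * np.sqrt(phi @ Vinv @ phi)
               for phi in (self._phi(a) for a in arms)]
        return int(np.argmax(ucb))                     # Stage 2: optimism in the face of uncertainty

    def update(self, x, reward):
        if self.t <= self.T1:
            self.X.append(x)
            self.r.append(reward)
            if self.t == self.T1:
                self._fit_relu()
        else:
            phi = self._phi(x)
            self.V += np.outer(phi, phi)
            self.b += reward * phi

    def _fit_relu(self, iters=500, lr=0.05):
        # Crude gradient fit of (W, a) on the exploration data; any regression
        # routine that recovers the ReLU parameters accurately enough would do.
        X, r = np.array(self.X), np.array(self.r)
        rng = np.random.default_rng(0)
        W = rng.normal(size=(self.m, self.d)) / np.sqrt(self.d)
        a = rng.normal(size=self.m) / np.sqrt(self.m)
        for _ in range(iters):
            H = relu(X @ W.T)                            # (n, m) hidden activations
            err = H @ a - r                              # (n,) residuals
            a -= lr * H.T @ err / len(r)
            G = (err[:, None] * a[None, :]) * (H > 0)    # (n, m) backprop through ReLU
            W -= lr * G.T @ X / len(r)
        self.W = W
```

The masking in `_phi` is what reduces the problem to a linear bandit: within a fixed activation pattern the ReLU network is linear in the unknown parameters, so a standard UCB-type linear bandit rule applies to the transformed features.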