We propose a novel framework for contextual multi-armed bandits based on tree ensembles. Our framework integrates two widely used bandit methods, Upper Confidence Bound and Thompson Sampling, for both standard and combinatorial settings. We demonstrate the effectiveness of our framework via several experimental studies, employing both XGBoost and random forest, two popular tree ensemble methods. Compared to state-of-the-art methods based on decision trees and neural networks, our methods exhibit superior performance in terms of both regret minimization and computational runtime, when applied to benchmark datasets and the real-world application of navigation over road networks.
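To make the bandit machinery concrete, below is a minimal, generic sketch of the UCB1 selection rule on a standard (non-contextual) multi-armed bandit; it is an illustrative textbook baseline, not the paper's tree-ensemble algorithm, and all names (`ucb1_select`, `run_bandit`) are hypothetical.

```python
import math
import random

def ucb1_select(counts, values, t, c=2.0):
    """Pick the arm maximizing mean reward plus an exploration bonus (UCB1).
    counts[i]: number of pulls of arm i; values[i]: running mean reward of arm i."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # play every arm once before using the bonus
    scores = [values[i] + math.sqrt(c * math.log(t) / counts[i])
              for i in range(len(counts))]
    return max(range(len(scores)), key=scores.__getitem__)

def run_bandit(true_means, horizon, seed=0):
    """Simulate Bernoulli arms with the given success probabilities."""
    rng = random.Random(seed)
    k = len(true_means)
    counts, values = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        arm = ucb1_select(counts, values, t)
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts
```

In the paper's setting, the running mean per arm would instead be replaced by a tree-ensemble estimate of the reward given the context, with the exploration bonus (UCB) or posterior sampling (Thompson Sampling) built on top of that estimate.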