Sequential decision-making algorithms such as multi-armed bandits can find optimal personalized decisions, but are notoriously sample-hungry. In personalized medicine, for example, training a bandit from scratch for every patient is typically infeasible, as the number of trials required is much larger than the number of decision points for a single patient. To combat this, latent bandits offer rapid exploration and personalization beyond what context variables alone allow, provided that a latent variable model of problem instances can be learned consistently. However, existing works give no guidance as to how such a model can be found. In this work, we propose an identifiable latent bandit framework that leads to optimal decision-making with a shorter exploration time than classical bandits by learning from historical records of decisions and outcomes. Our method is based on nonlinear independent component analysis that provably identifies, from observational data, representations sufficient to infer optimal actions in new bandit instances. We verify this strategy in simulated and semi-synthetic environments, showing substantial improvement over online and offline learning baselines when identifying conditions are satisfied.
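To make the latent-bandit idea concrete, here is a minimal sketch of the online phase under strong simplifying assumptions that are not part of the paper: a finite set of `M` latent problem instances with an offline-learned mean-reward table `mu` (standing in for the nonlinear-ICA representation), Gaussian reward noise with known scale `sigma`, and greedy action selection under a posterior over the latent instance. All names and structure here are illustrative, not the authors' algorithm.

```python
import numpy as np

# Illustrative latent-bandit sketch (hypothetical setup, not the paper's method).
# Offline: a model of M latent instances is assumed learned, giving mean
# rewards mu[m, a] for each arm a. Online: maintain a posterior over which
# latent instance the new patient/problem is, and act on it.

rng = np.random.default_rng(0)
M, A, sigma = 5, 4, 0.5              # latent instances, arms, noise scale (assumed)
mu = rng.normal(size=(M, A))         # stand-in for the offline-learned model

def run_episode(true_m: int, horizon: int = 50) -> float:
    """Interact with one new bandit instance; return cumulative reward."""
    log_post = np.zeros(M)           # uniform prior over latent instances
    total = 0.0
    for _ in range(horizon):
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        a = int(np.argmax(post @ mu))            # greedy w.r.t. posterior mean
        r = mu[true_m, a] + sigma * rng.normal() # observed noisy reward
        total += r
        # Bayesian update: Gaussian log-likelihood of r under each instance
        log_post += -0.5 * ((r - mu[:, a]) / sigma) ** 2
    return total

print(run_episode(true_m=2))
```

The point of the sketch is the source of the sample-efficiency gain: exploration only needs to disambiguate among `M` pre-learned instances rather than estimate every arm's reward from scratch, which is why consistent (identifiable) learning of the latent model from historical records is the crux of the framework.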