The AlphaZero framework provides a standard way of combining Monte Carlo planning with the prior knowledge of a previously trained policy-value neural network. AlphaZero typically assumes that the environment on which the neural network was trained does not change at test time, which constrains its applicability. In this paper, we analyze the problem of deploying AlphaZero agents in potentially changed test environments and demonstrate how a combination of simple modifications to the standard framework can significantly boost performance, even when only a low planning budget is available. The code is publicly available on GitHub.
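The combination the abstract refers to is commonly realized through the PUCT selection rule, in which the network's prior policy biases tree exploration while averaged search values drive exploitation. The following is a minimal illustrative sketch of that rule, not the paper's implementation; the function names, the `children` data layout, and the exploration constant `c_puct` are assumptions chosen for clarity.

```python
import math

def puct_score(parent_visits, child_visits, child_value, prior, c_puct=1.25):
    """PUCT score used by AlphaZero-style planners: the network prior
    scales an exploration bonus that decays as a child is visited,
    while the child's mean value provides the exploitation term.
    (Illustrative sketch; c_puct=1.25 is an assumed constant.)"""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration

def select_child(children):
    """Pick the action with the highest PUCT score.
    `children` maps action -> (visit_count, mean_value, prior)."""
    parent_visits = sum(visits for visits, _, _ in children.values())
    return max(
        children,
        key=lambda action: puct_score(parent_visits, *children[action]),
    )

# A rarely visited child with a strong prior can outrank a
# well-visited child with a higher mean value:
children = {"a": (10, 0.5, 0.3), "b": (1, 0.2, 0.7)}
```

Here `select_child(children)` returns `"b"`: its exploration bonus (large prior, few visits) outweighs the value advantage of `"a"`, which is exactly how the prior network steers the search before statistics accumulate.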