Recently, deep reinforcement learning (DRL) has achieved promising results in solving online 3D Bin Packing Problems (3D-BPP). However, these DRL-based policies may perform poorly on new instances due to distribution shift. Beyond generalization, we also consider adaptation, which has been completely overlooked by previous work and aims at rapidly finetuning these policies to a new test distribution. To tackle both the generalization and adaptation issues, we propose Adaptive Selection After Pruning (ASAP), which decomposes a solver's decision-making into two policies, one for pruning and one for selection. The role of the pruning policy is to remove inherently bad actions, allowing the selection policy to choose among the remaining, most valuable actions. To learn these policies, we propose a training scheme consisting of a meta-learning phase for both policies, followed by a finetuning phase of the selection policy alone to rapidly adapt it to a test distribution. Our experiments demonstrate that ASAP exhibits excellent generalization and adaptation capabilities on in-distribution and out-of-distribution instances under both discrete and continuous setups.
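The prune-then-select decomposition described above can be illustrated with a minimal sketch. Here the pruning and selection policies are hypothetical stand-ins (simple score dictionaries rather than learned networks), and the `keep_k` threshold and toy placement candidates are assumptions for illustration only; the actual ASAP policies are trained with DRL as described in the abstract.

```python
def prune(actions, prune_scores, keep_k):
    """Pruning policy (stand-in): drop inherently bad actions,
    keeping only the keep_k candidates with the highest scores."""
    ranked = sorted(actions, key=lambda a: prune_scores[a], reverse=True)
    return ranked[:keep_k]

def select(candidates, select_scores):
    """Selection policy (stand-in): pick the most valuable action
    among the candidates that survived pruning."""
    return max(candidates, key=lambda a: select_scores[a])

# Toy example: 6 candidate placements for an incoming box.
actions = list(range(6))
prune_scores = {0: 0.1, 1: 0.9, 2: 0.4, 3: 0.8, 4: 0.2, 5: 0.7}
select_scores = {0: 0.5, 1: 0.3, 2: 0.9, 3: 0.6, 4: 0.8, 5: 0.2}

kept = prune(actions, prune_scores, keep_k=3)  # keeps actions 1, 3, 5
choice = select(kept, select_scores)           # picks action 3
print(kept, choice)
```

Note how the two policies disagree: the selection policy's globally highest-scoring action (2) was already pruned, so the final choice is made only among the survivors. Under the training scheme above, only the selection scores would be finetuned at test time, while the pruning policy is kept fixed after meta-learning.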