In planning and reinforcement learning, identifying common subgoal structures across problems is important when goals must be achieved over long horizons. Recently, it has been shown that such structures can be expressed as feature-based rules, called sketches, over a number of classical planning domains. Sketches split problems into subproblems that a greedy sequence of IW$(k)$ searches can then solve in low polynomial time. Methods for learning sketches from feature pools using min-SAT solvers have been developed, but they face two key limitations: scalability and expressivity. In this work, we address these limitations by formulating the problem of learning sketch decompositions as a deep reinforcement learning (DRL) task in which general policies are sought over a modified planning problem whose successor states of a state $s$ are those reachable from $s$ through an IW$(k)$ search. We evaluate the resulting sketch decompositions experimentally across several domains, regarding a problem as solved by a decomposition when a greedy sequence of IW$(k)$ searches reaches the goal. While our DRL approach for learning sketch decompositions does not yield interpretable sketches in the form of rules, we demonstrate that the resulting decompositions can often be understood in a crisp manner.
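To make the "greedy sequence of IW$(k)$ searches" concrete, the following is a minimal, self-contained sketch for $k=1$ on a toy counter domain. All names and the domain itself are our illustrative assumptions, not the paper's implementation: states are frozensets of boolean atoms, IW(1) is breadth-first search that prunes any generated state containing no atom unseen so far in the current search, and the greedy loop chains one IW(1) call per subgoal until the goal is reached.

```python
from collections import deque

def iw1_search(state, successors, is_target, max_nodes=100_000):
    """BFS with IW(1) novelty pruning: a generated state is kept only
    if it contains at least one atom not yet seen in this search."""
    seen_atoms = set(state)
    queue = deque([(state, ())])
    expanded = 0
    while queue and expanded < max_nodes:
        s, path = queue.popleft()
        expanded += 1
        if is_target(s):
            return s, list(path)
        for action, s2 in successors(s):
            if any(atom not in seen_atoms for atom in s2):
                seen_atoms.update(s2)
                queue.append((s2, path + (action,)))
    return None  # subgoal not reachable within the width-1 search

def greedy_iw1(state, successors, subgoal_tests):
    """Greedy sequence of IW(1) searches: each call stops at the next
    subgoal, decomposing a long-horizon problem into subproblems."""
    plan = []
    for test in subgoal_tests:
        result = iw1_search(state, successors, test)
        if result is None:
            return None
        state, segment = result
        plan.extend(segment)
    return state, plan

# Toy domain (ours, purely illustrative): a counter 0..N that moves
# one step at a time; the single atom records its current value.
N = 6
def successors(s):
    (v,) = [a[1] for a in s if a[0] == "val"]
    out = []
    if v < N:
        out.append((f"inc->{v + 1}", frozenset({("val", v + 1)})))
    if v > 0:
        out.append((f"dec->{v - 1}", frozenset({("val", v - 1)})))
    return out

start = frozenset({("val", 0)})
# Subgoal sequence: first reach value 3, then the goal value 6.
subgoals = [lambda s, t=t: ("val", t) in s for t in (3, 6)]
final, plan = greedy_iw1(start, successors, subgoals)
print(len(plan), ("val", 6) in final)  # 6 True
```

In the DRL formulation described above, the fixed subgoal tests would instead be produced by a learned general policy acting on the modified problem whose successors are IW$(k)$-reachable states; here they are hard-coded only to show the decomposition mechanics.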