Real-world tasks involve nuanced combinations of goal and safety specifications. In high dimensions, the challenge is exacerbated: formal automata become cumbersome, and combining sparse rewards tends to require laborious tuning. In this work, we consider the innate structure of the Bellman Value as a means to naturally organize the problem for improved automatic performance. Namely, we prove that the Bellman Value for a complex task defined in temporal logic can be decomposed into a graph of Bellman Values, connected by a set of well-known Bellman equations (BEs): the Reach-Avoid BE, the Avoid BE, and a novel type, the Reach-Avoid-Loop BE. To solve for the Value and optimal policy, we propose VDPPO, which embeds the decomposed Value graph into a two-layer neural net, bootstrapping the implicit dependencies. We conduct a variety of simulated and hardware experiments to test our method on complex, high-dimensional tasks involving heterogeneous teams and nonlinear dynamics. Ultimately, we find this approach greatly improves performance over existing baselines, balancing safety and liveness automatically.
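To make the Reach-Avoid Bellman equation mentioned above concrete, the following is a minimal sketch (not the paper's VDPPO implementation) of the standard discounted reach-avoid fixed point on a toy 1-D gridworld. The states, goal/avoid indices, and discount factor are illustrative assumptions; the equation being iterated is the usual one: the Value is 1 at the goal, 0 in the avoid set, and otherwise the discounted maximum over successor Values.

```python
# Hypothetical toy example: discounted Reach-Avoid Bellman equation
# on a 1-D gridworld with states 0..4. State 4 is the goal, state 0
# is the avoid region; actions step left or right by one cell.
N = 5
GOAL, AVOID = 4, 0
gamma = 0.95

V = [0.0] * N
for _ in range(100):  # value iteration until (numerical) convergence
    for s in range(N):
        if s == GOAL:
            V[s] = 1.0        # reaching the goal yields Value 1
        elif s == AVOID:
            V[s] = 0.0        # entering the avoid set zeroes the Value
        else:
            # Reach-Avoid BE: discounted max over successor states
            V[s] = gamma * max(V[max(s - 1, 0)], V[min(s + 1, N - 1)])
```

After convergence, `V[s]` decays geometrically with the distance to the goal (e.g. `V[3] == gamma`, `V[2] == gamma**2`), which is the sparse-reward structure the decomposition in this work exploits at scale.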