Safe learning is essential for deploying learning-based controllers in safety-critical robotic systems, yet existing approaches often enforce multiple safety constraints uniformly or via fixed priority orders, leading to infeasibility and brittle behavior. In practice, safety requirements are heterogeneous and admit only partial priority relations: some constraints are comparable while others are inherently incomparable. We formalize this setting as poset-structured safety, modeling safety constraints as a partially ordered set and treating safety composition as a structural property of the policy class. Building on this formulation, we propose PoSafeNet, a differentiable neural safety layer that enforces safety via sequential closed-form projection under poset-consistent constraint orderings, enabling adaptive selection or mixing of valid safety executions while preserving priority semantics by construction. Experiments on multi-obstacle navigation, constrained robot manipulation, and vision-based autonomous driving demonstrate improved feasibility, robustness, and scalability over unstructured and differentiable quadratic-program-based safety layers.
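To make the core mechanism concrete, the following is a minimal sketch (not the authors' implementation) of sequential closed-form projection under a poset-consistent ordering: the priority poset is linearized into one of its valid topological orders, and a nominal action is projected in that order onto each half-space constraint g·a ≤ h, for which the Euclidean projection has a closed form. All constraint names and numeric values below are hypothetical.

```python
# Hedged sketch, not PoSafeNet itself: sequential closed-form projection of a
# nominal action onto half-space safety constraints {a : g . a <= h}, applied
# in an order consistent with a given priority poset.
import numpy as np
from graphlib import TopologicalSorter

def poset_order(edges, nodes):
    """Return one linear extension of the poset. Each edge (hi, lo)
    means constraint `hi` has priority over constraint `lo`."""
    ts = TopologicalSorter({n: set() for n in nodes})
    for hi, lo in edges:
        ts.add(lo, hi)               # lo depends on hi, so hi is ordered first
    return list(ts.static_order())

def project_halfspace(a, g, h):
    """Closed-form Euclidean projection of a onto {x : g . x <= h}."""
    slack = g @ a - h
    if slack <= 0:
        return a                     # already feasible, no correction needed
    return a - slack * g / (g @ g)   # move orthogonally onto the boundary

def sequential_projection(a_nom, constraints, order):
    """Project the nominal action through the constraints in priority order."""
    a = np.asarray(a_nom, dtype=float)
    for name in order:
        g, h = constraints[name]
        a = project_halfspace(a, np.asarray(g, dtype=float), float(h))
    return a

# Toy 2-D example: obstacle and speed limits both outrank comfort,
# but are incomparable to each other (a genuine partial order).
constraints = {
    "obstacle": (np.array([1.0, 0.0]), 0.5),
    "speed":    (np.array([0.0, 1.0]), 0.8),
    "comfort":  (np.array([1.0, 1.0]), 1.5),
}
order = poset_order([("obstacle", "comfort"), ("speed", "comfort")],
                    constraints.keys())
a_safe = sequential_projection([1.0, 1.0], constraints, order)
```

In a differentiable safety layer, each projection step is a smooth (piecewise-linear) map of the input action, so gradients can flow through the whole sequence; the paper's layer additionally learns to select or mix among the valid linear extensions rather than fixing one, as the single `poset_order` call above does.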