This article presents novel methods for synthesizing distributionally robust stabilizing neural controllers and certificates for control systems under model uncertainty. A key challenge in designing controllers with stability guarantees for uncertain systems is accurately identifying, and adapting to, shifts in the model's parametric uncertainty during online deployment. We tackle this challenge with a novel distributionally robust formulation of the Lyapunov derivative chance constraint, which ensures a monotonic decrease of the Lyapunov certificate. To avoid the computational complexity of optimizing over the space of probability measures, we identify a sufficient condition, in the form of deterministic convex constraints, that guarantees the Lyapunov derivative chance constraint is satisfied. We integrate this condition into a loss function for training a neural-network-based controller and show that, for the resulting closed-loop system, global asymptotic stability of its equilibrium can be certified with high confidence, even under Out-of-Distribution (OoD) model uncertainties. To demonstrate the efficacy and efficiency of the proposed methodology, we compare it against an uncertainty-agnostic baseline and several reinforcement learning approaches on two simulated control problems.
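To make the abstract's training objective concrete, the following is a minimal, hypothetical sketch, not the paper's implementation: a plain-NumPy hinge penalty on the Lyapunov decrease condition, averaged over sampled model parameters as an empirical stand-in for the chance constraint. The 2-D linear system `A(theta)`, the fixed gain `K` (standing in for the neural controller), the quadratic certificate `V(x) = x^T P x`, and the Gaussian parameter samples (in place of a distributionally robust ambiguity set) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D linear system x_dot = A(theta) x + B u with one
# uncertain parameter theta (illustrative, not from the paper).
def A(theta):
    return np.array([[0.0, 1.0], [theta, -0.5]])

B = np.array([[0.0], [1.0]])
K = np.array([[4.0, 3.0]])   # linear gain standing in for the neural controller
P = np.eye(2)                # quadratic Lyapunov candidate V(x) = x^T P x
alpha = 0.1                  # required exponential decrease rate

def decrease_penalty(x, thetas):
    """Hinge penalty on V_dot + alpha * V <= 0, averaged over sampled
    model parameters (an empirical surrogate for the chance constraint)."""
    u = -K @ x
    penalties = []
    for th in thetas:
        x_dot = A(th) @ x + B @ u
        V = (x.T @ P @ x).item()
        V_dot = (2.0 * x.T @ P @ x_dot).item()
        penalties.append(max(0.0, V_dot + alpha * V))
    return float(np.mean(penalties))

# Gaussian parameter samples from a nominal distribution; the paper's
# formulation robustifies this over an ambiguity set of distributions.
thetas = rng.normal(-2.0, 0.3, size=64)
x = np.array([[1.0], [0.5]])
loss = decrease_penalty(x, thetas)
```

In a training loop, `loss` would be accumulated over sampled states and minimized together with the controller's other objectives; a value of zero at a state means the decrease condition held for every sampled parameter there.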