Stability certificates play a critical role in ensuring the safety and reliability of robotic systems. However, deriving these certificates for complex, unknown systems has traditionally required explicit knowledge of the system dynamics, often making it a daunting task. This work introduces a novel framework that learns a Lyapunov function directly from trajectory data, enabling stability certification for autonomous systems without detailed system models. By parameterizing the Lyapunov candidate with a neural network and enforcing positive definiteness through a Cholesky factorization, our approach automatically determines whether the system is stable along the given trajectory. To address the challenges posed by noisy, real-world data, we allow controlled violations of the stability condition while maintaining high confidence in the certification process. Our results demonstrate that this framework can provide data-driven stability guarantees, offering a robust method for certifying the safety of robotic systems in dynamic, real-world environments. The approach requires no access to the internal control algorithms, making it applicable even when system behavior is opaque or proprietary. The tool for learning the stability proof is open-sourced at: https://github.com/HansOersted/stability.
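To make the construction concrete, the following is a minimal sketch (in PyTorch, not the authors' released code) of one common way to realize such a certificate: the Lyapunov candidate uses a neural feature map combined with a learned Cholesky factor so that positive definiteness holds by construction, and the decrease condition is relaxed by a hinge margin so that a controlled fraction of violations on noisy trajectory data can be tolerated. All class, function, and parameter names here are illustrative assumptions.

```python
# Sketch of a neural Lyapunov candidate with positive definiteness enforced
# via a learned Cholesky-style factor: V(x) = phi(x)^T (L L^T + eps*I) phi(x).
# This is an illustrative assumption, not the released implementation.
import torch
import torch.nn as nn

class LyapunovCandidate(nn.Module):
    def __init__(self, state_dim: int, feature_dim: int = 16, eps: float = 1e-4):
        super().__init__()
        self.eps = eps
        # Feature map phi(x): a small fully connected network.
        self.phi = nn.Sequential(
            nn.Linear(state_dim, 32), nn.Tanh(),
            nn.Linear(32, feature_dim),
        )
        # Unconstrained parameters, masked into a lower-triangular factor L.
        self.raw_L = nn.Parameter(0.1 * torch.randn(feature_dim, feature_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.phi(x)                                   # (batch, feature_dim)
        L = torch.tril(self.raw_L)                        # Cholesky-style factor
        A = L @ L.T + self.eps * torch.eye(L.shape[0])    # positive definite matrix
        # V(x) = phi(x)^T A phi(x) >= eps * ||phi(x)||^2, so V is nonnegative
        # and positive wherever phi(x) is nonzero.
        return torch.einsum("bi,ij,bj->b", f, A, f)

def lyapunov_decrease_loss(V, x_t, x_next, margin: float = 1e-3):
    # Hinge on the decrease condition V(x_{t+1}) - V(x_t) <= -margin,
    # averaged over sampled trajectory pairs; noisy samples may violate it,
    # and the training objective only penalizes the amount of violation.
    return torch.relu(V(x_next) - V(x_t) + margin).mean()
```

In such a setup, the candidate would be fitted by minimizing the hinge loss over recorded state pairs, and the remaining fraction and magnitude of violations would then inform the confidence attached to the resulting certificate.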