Contraction theory is an analytical tool for studying the differential dynamics of a non-autonomous (i.e., time-varying) nonlinear system under a contraction metric defined by a uniformly positive definite matrix, whose existence yields a necessary and sufficient characterization of incremental exponential stability of multiple solution trajectories with respect to each other. By using a squared differential length as a Lyapunov-like function, the nonlinear stability analysis reduces to finding a suitable contraction metric that satisfies a stability condition expressed as a linear matrix inequality, indicating that many parallels can be drawn between well-known linear systems theory and contraction theory for nonlinear systems. Furthermore, contraction theory exploits the superior robustness of exponential stability, used in conjunction with the comparison lemma. This yields much-needed safety and stability guarantees for neural-network-based control and estimation schemes, without resorting to the more involved approach of establishing input-to-state stability via uniform asymptotic stability. These distinctive features permit the systematic construction of a contraction metric via convex optimization, thereby yielding an explicit exponential bound on the distance between a time-varying target trajectory and solution trajectories perturbed externally by disturbances and learning errors. The objective of this paper is, therefore, to present a tutorial overview of contraction theory and its advantages in the nonlinear stability analysis of deterministic and stochastic systems, with an emphasis on deriving formal robustness and stability guarantees for various learning-based and data-driven automatic control methods. In particular, we provide a detailed review of techniques for finding contraction metrics and associated control and estimation laws using deep neural networks.
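To make the quantities in the abstract concrete, the standard contraction condition can be sketched as follows. The notation is conventional but not fixed by the abstract itself: for a system $\dot{x} = f(x,t)$, $\delta x$ denotes an infinitesimal displacement between neighboring trajectories, $M(x,t)$ the contraction metric with uniform bounds $\underline{m} I \preceq M \preceq \overline{m} I$, and $\alpha > 0$ the contraction rate.

```latex
% Differential dynamics and the squared differential length (Lyapunov-like function):
\[
  \delta\dot{x} = \frac{\partial f}{\partial x}(x,t)\,\delta x, \qquad
  V(x,\delta x,t) = \delta x^{\top} M(x,t)\,\delta x .
\]
% The LMI-type contraction condition implies, via the comparison lemma,
% an explicit incremental exponential bound:
\[
  \dot{M} + \frac{\partial f}{\partial x}^{\top} M
          + M \frac{\partial f}{\partial x} \preceq -2\alpha M
  \;\Longrightarrow\;
  \dot{V} \le -2\alpha V
  \;\Longrightarrow\;
  \|\delta x(t)\|^{2} \le \frac{\overline{m}}{\underline{m}}\,
  e^{-2\alpha t}\,\|\delta x(0)\|^{2}.
\]
```

Since the condition is linear in $M$ for a fixed rate $\alpha$, searching for a valid metric can be posed as the convex (semidefinite) program mentioned in the abstract.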