Learning-based approaches for controlling safety-critical systems are rapidly growing in popularity; thus, it is important to assure their performance and safety. Hamilton-Jacobi (HJ) reachability analysis is a popular formal verification tool for providing such guarantees, since it can handle general nonlinear system dynamics, bounded adversarial system disturbances, and state and input constraints. However, its computational and memory complexity scales exponentially with the state dimension, making it intractable for large-scale systems. To overcome this challenge, neural approaches, such as DeepReach, have been used to synthesize reachable tubes and safety controllers for high-dimensional systems. However, verifying these neural reachable tubes remains challenging. In this work, we propose two verification methods, based on robust scenario optimization and conformal prediction, to provide probabilistic safety guarantees for neural reachable tubes. Our methods allow a direct trade-off between resilience to outlier errors in the neural tube, which are inevitable in a learning-based approach, and the strength of the probabilistic safety guarantee. Furthermore, we show that split conformal prediction, a widely used method in the machine learning community for uncertainty quantification, reduces to a scenario-based approach, making the two methods equivalent not only for verification of neural reachable tubes but also more generally. To our knowledge, our proof is the first in the literature to show a strong relationship between conformal prediction and scenario optimization. Finally, we propose an outlier-adjusted verification approach that uses the error distribution in neural reachable tubes to recover greater safe volumes. We demonstrate the efficacy of the proposed approaches for the high-dimensional problems of multi-vehicle collision avoidance and rocket landing with no-go zones.
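The abstract only summarizes the verification methods, so the following minimal sketch illustrates the kind of split-conformal correction it alludes to. It is our own illustration, not code from the paper: the function name `conformal_level_correction`, the choice of the absolute value-function error as the nonconformity score, and the rule of shrinking the learned tube's sub-zero level set by the resulting margin are all assumptions made for this example.

```python
import numpy as np

def conformal_level_correction(scores, alpha):
    """Split-conformal quantile of calibration scores.

    scores : 1-D array of nonconformity scores, e.g. the learned value
             function's error |V_hat(x_i) - V(x_i)| on n held-out states.
    alpha  : target violation probability (e.g. 0.01).

    Returns delta such that a fresh score exceeds delta with probability
    at most alpha (a marginal, distribution-free guarantee).
    """
    scores = np.sort(np.asarray(scores))
    n = scores.size
    k = int(np.ceil((n + 1) * (1.0 - alpha)))  # 1-indexed order statistic
    if k > n:
        raise ValueError("Too few calibration samples for this alpha.")
    return scores[k - 1]

# Usage sketch (names and numbers are illustrative, not from the paper):
# shrink the neural tube's candidate safe set {x : V_hat(x) <= 0} to
# {x : V_hat(x) <= -delta}, so that a state declared safe is actually
# safe with probability at least 1 - alpha.
rng = np.random.default_rng(0)
calib_errors = np.abs(rng.normal(0.0, 0.05, size=2000))  # stand-in errors
delta = conformal_level_correction(calib_errors, alpha=0.01)
print(f"level-set correction delta = {delta:.4f}")
```

Because the correction is a fixed order statistic of sampled errors, the same computation can be read as a scenario-optimization certificate over the calibration samples, which is the sense in which the abstract describes the two methods as equivalent.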