The certification of autonomous systems is an important concern in science and industry. The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains. We pursue a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker. This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker. The demonstrator is integrated into the validation tool ProB. This enables runtime monitoring, runtime verification, and statistical validation of formal safety properties on a formal B model. Consequently, one can detect and analyse potential vulnerabilities and weaknesses of the AI and the certificate checker. We apply these techniques to a signal detection case study and present our findings.