This paper introduces an extension to the arbitration graph framework designed to enhance the safety and robustness of autonomous systems in complex, dynamic environments. Building on the flexibility and scalability of arbitration graphs, the proposed method incorporates a verification step and structured fallback layers into the decision-making process. This ensures that only verified, safe commands are executed while enabling graceful degradation in the presence of unexpected faults or bugs. The approach is demonstrated using a Pac-Man simulation and further validated in the context of autonomous driving, where it shows significant reductions in accident risk and improvements in overall system safety. The bottom-up design of arbitration graphs allows for incremental integration of new behavior components. The extension presented in this work enables the integration of experimental or immature behavior components while maintaining system safety, by clearly and precisely defining the conditions under which behaviors are considered safe. The proposed method is implemented as a ready-to-use, header-only C++ library, published under the MIT License. Together with the Pac-Man demo, it is available at github.com/KIT-MRT/arbitration_graphs.