Complex systems are increasingly explored through simulation-driven engineering workflows that combine physics-based and empirical models with optimization and analytics. Despite their power, these workflows face two central obstacles: (1) high computational cost, since accurate exploration requires many expensive simulator runs; and (2) limited transparency and reliability when decisions rely on opaque black-box components. We propose a workflow that addresses both challenges by training lightweight emulators on compact designs of experiments; these emulators (i) provide fast, low-latency approximations of expensive simulators, (ii) enable rigorous uncertainty quantification, and (iii) are well suited to global and local Explainable Artificial Intelligence (XAI) analyses. The workflow applies broadly to simulation-based complex-system analyses, from engineering design to agent-based models for socio-environmental understanding. In this paper, we propose a comparative methodology and practical recommendations for using surrogate-based explainability tools within the proposed workflow. The methodology supports continuous and categorical inputs, combines global-effect and uncertainty analyses with local attribution, and evaluates the consistency of explanations across surrogate models, thereby diagnosing surrogate adequacy and guiding further data collection or model refinement. We demonstrate the approach on two contrasting case studies: a multidisciplinary design analysis of a hybrid-electric aircraft and an agent-based model of urban segregation. Results show that coupling surrogate models with XAI enables large-scale exploration in seconds, uncovers nonlinear interactions and emergent behaviors, identifies key design and policy levers, and signals regions where surrogates require more data or alternative architectures.