Reliable simulations are critical for analyzing and understanding complex systems, but their accuracy depends on correct input data. Incorrect inputs, such as invalid or out-of-range values, missing data, and format inconsistencies, can cause simulation crashes or unnoticed distortions of results, ultimately undermining the validity of the conclusions drawn. This paper presents a methodology for verifying the validity of simulation input data, a process we term model input verification (MIV). We implement this approach in FabGuard, a toolset that adapts established data-schema and validation tools to the specific needs of simulation modeling. We introduce a formalism for categorizing MIV patterns and offer a streamlined verification pipeline that integrates into existing simulation workflows. We demonstrate FabGuard's applicability across three diverse domains: conflict-driven migration, disaster evacuation, and disease spread models. We also explore the use of Large Language Models (LLMs) for automating constraint generation and inference. In a case study with a migration simulation, LLMs not only correctly inferred 22 out of 23 developer-defined constraints, but also identified errors in existing constraints and proposed new, valid constraints. Our evaluation shows that MIV is feasible on large datasets, with FabGuard processing 12,000 input files in 140 seconds and maintaining consistent performance across varying file sizes.