We present a rigorous, human-in-the-loop evaluation framework for assessing the performance of AI agents on the task of Air Traffic Control, grounded in a regulator-certified, simulator-based curriculum used to train and test real-world trainee controllers. By leveraging legally regulated assessments and involving expert human instructors in the evaluation process, our framework enables a more authentic and domain-accurate measurement of AI performance. This work addresses a critical gap in the existing literature: the frequent misalignment between academic representations of Air Traffic Control and the complexities of the actual operational environment. It also lays the foundation for effective future human-machine teaming paradigms by aligning machine performance with human assessment targets.