Air-ground collaborative intelligence is becoming a key approach to next-generation urban intelligent transportation management, in which aerial and ground systems cooperate on perception, communication, and decision-making. However, the lack of a unified multi-modal simulation environment has limited progress on cross-domain perception, coordination under communication constraints, and joint decision optimization. To address this gap, we present TranSimHub, a unified simulation platform for air-ground collaborative intelligence. TranSimHub offers synchronized multi-view rendering across RGB, depth, and semantic segmentation modalities, ensuring consistent perception between aerial and ground viewpoints. It also supports information exchange between the two domains and includes a causal scene editor that enables controllable scenario creation and counterfactual analysis under diverse conditions such as varying weather, emergency events, and dynamic obstacles. We release TranSimHub as an open-source platform that supports end-to-end research on perception, fusion, and control across realistic air and ground traffic scenes. Our code is available at https://github.com/Traffic-Alpha/TransSimHub.