Multi-agent systems are increasingly equipped with heterogeneous multimodal sensors, enabling richer perception but introducing modality-specific and agent-dependent uncertainty. Existing multi-agent collaboration frameworks typically reason at the agent level, assume homogeneous sensing, and handle uncertainty implicitly, which limits robustness under sensor corruption. We propose Active Asymmetric Multi-Agent Multimodal Learning under Uncertainty (A2MAML), a principled approach to uncertainty-aware, modality-level collaboration. A2MAML models each modality-specific feature as a stochastic estimate with a predicted uncertainty, actively selects reliable agent-modality pairs, and aggregates information via Bayesian inverse-variance weighting. This formulation enables fine-grained, modality-level fusion, supports asymmetric modality availability, and provides a principled mechanism for suppressing corrupted or noisy modalities. Extensive experiments on connected autonomous driving scenarios for collaborative accident detection demonstrate that A2MAML consistently outperforms both single-agent and collaborative baselines, improving the accident detection rate by up to 18.7%.
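The core aggregation step described above can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's implementation: it assumes each agent-modality pair contributes a feature estimate with a predicted per-dimension variance, fuses estimates by inverse-variance (precision) weighting, and uses a simple variance threshold as a stand-in for the active selection of reliable pairs.

```python
# Hypothetical sketch of Bayesian inverse-variance fusion: each agent-modality
# pair i contributes an estimate mu_i with predicted variance sigma_i^2, and the
# fused estimate weights each contribution by its precision 1 / sigma_i^2.
# Function name, arguments, and the thresholding rule are illustrative
# assumptions, not the paper's actual API.
import numpy as np

def inverse_variance_fusion(means, variances, max_var=None):
    """Fuse per-modality estimates by inverse-variance weighting.

    means:     (N, D) array of N modality-specific feature estimates.
    variances: (N, D) array of predicted variances (uncertainty) for each.
    max_var:   optional threshold; pairs whose mean variance exceeds it are
               dropped (a crude stand-in for active reliability selection).
    Returns the fused (D,) estimate and its (D,) posterior variance.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    if max_var is not None:
        keep = variances.mean(axis=1) <= max_var   # discard unreliable pairs
        means, variances = means[keep], variances[keep]
    weights = 1.0 / variances                       # precision weights
    fused = (weights * means).sum(axis=0) / weights.sum(axis=0)
    fused_var = 1.0 / weights.sum(axis=0)           # posterior variance
    return fused, fused_var
```

Under this weighting, a corrupted modality with large predicted variance contributes little to the fused feature even if it is not dropped outright, which is the suppression mechanism the abstract refers to.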