Multimodal remote sensing classification often suffers from missing modalities caused by sensor failures and environmental interference, leading to severe performance degradation. In this work, we rethink missing-modality learning from a conditional computation perspective and investigate whether Mixture-of-Experts (MoE) models can inherently adapt to diverse modality-missing scenarios. We first conduct a systematic study of representative MoE paradigms under various missing-modality settings, revealing both their potential and limitations. Building on these insights, we propose a Missing-aware Mixture-of-LoRAs (MaMOL), a parameter-efficient MoE framework that unifies multiple modality-missing cases within a single model. MaMOL introduces a dual-routing mechanism to decouple modality-invariant shared experts and modality-aware dynamic experts, enabling automatic expert activation conditioned on available modalities. Extensive experiments on multiple remote sensing benchmarks demonstrate that MaMOL significantly improves robustness and generalization under diverse missing-modality scenarios with minimal computational overhead. Transfer experiments on natural image datasets further validate its scalability and cross-domain applicability.
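To make the dual-routing idea concrete, below is a minimal NumPy sketch of a missing-aware mixture of LoRA experts. All names (`LoRAExpert`, `MaMoLLayer`, the modality keys, ranks, and the averaging rule) are illustrative assumptions, not the paper's implementation: a frozen base weight is augmented by one always-active shared expert and one low-rank dynamic expert per modality, and a hard router conditioned on the modality-availability mask decides which dynamic experts fire.

```python
import numpy as np

class LoRAExpert:
    """Low-rank adapter: the weight update is B @ A with rank r << d."""
    def __init__(self, d_in, d_out, r, rng):
        # Small random init purely for illustration; real LoRA zero-inits B.
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = rng.standard_normal((d_out, r)) * 0.01

    def __call__(self, x):
        return x @ self.A.T @ self.B.T

class MaMoLLayer:
    """Hypothetical sketch of a missing-aware mixture of LoRA experts.

    A frozen base weight W is augmented by a modality-invariant shared
    expert (always active) plus one modality-aware dynamic expert per
    modality. The router is the availability mask itself: a dynamic
    expert is activated only when its modality is present.
    """
    def __init__(self, d, r=4, modalities=("optical", "sar"), seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d, d)) / np.sqrt(d)  # frozen backbone weight
        self.shared = LoRAExpert(d, d, r, rng)             # modality-invariant expert
        self.dynamic = {m: LoRAExpert(d, d, r, rng) for m in modalities}

    def forward(self, x, available):
        # Shared expert always contributes; dynamic experts are hard-routed
        # on the availability mask, so a missing modality adds no update.
        y = x @ self.W.T + self.shared(x)
        active = [m for m in self.dynamic if available.get(m, False)]
        for m in active:
            y = y + self.dynamic[m](x) / len(active)
        return y

layer = MaMoLLayer(d=8, seed=1)
x = np.ones((2, 8))
y_full = layer.forward(x, {"optical": True, "sar": True})    # all modalities present
y_miss = layer.forward(x, {"optical": True, "sar": False})   # SAR missing
```

Because routing depends only on the availability mask, the same set of parameters serves every modality-missing case; dropping a modality simply deactivates its expert rather than corrupting the shared representation.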