Multimodal learning has significantly enhanced machine learning performance but still faces numerous challenges and limitations. Imbalanced multimodal learning has been extensively studied in recent work and is typically mitigated by modulating the learning of each modality. However, we find that these methods typically hinder the dominant modality's learning in order to promote weaker modalities, which degrades overall multimodal performance. We analyze the cause of this issue and highlight a commonly overlooked problem: optimization bias within networks. To address this, we propose Adaptive Intra-Network Modulation (AIM) to improve balanced modality learning. AIM accounts for differences in optimization state across parameters and depths within the network during modulation, achieving, for the first time, balanced multimodal learning that hinders neither the dominant nor the weak modality. Specifically, AIM decouples the dominant modality's under-optimized parameters into Auxiliary Blocks and encourages reliance on these performance-degraded blocks during joint training with weaker modalities. This approach effectively prevents suppression of weaker modalities while enabling targeted optimization of under-optimized parameters to improve the dominant modality. Additionally, AIM assesses the degree of modality imbalance at each network depth and adaptively adjusts the modulation strength at that depth. Experimental results demonstrate that AIM outperforms state-of-the-art imbalanced modality learning methods across multiple benchmarks and exhibits strong generalizability across different backbones, fusion strategies, and optimizers.
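To make the two mechanisms concrete, below is a minimal PyTorch sketch of the idea as we read it from the abstract. The abstract does not specify the implementation, so everything here is an assumption: the under-optimization score (a gradient-magnitude EMA), the `AuxiliaryBlock` that keeps only low-score parameters, the `depth_imbalance` measure, and the blending rule are all hypothetical stand-ins for the paper's actual mechanisms.

```python
# Hypothetical sketch of AIM's two components: (1) decoupling a dominant-modality
# layer's under-optimized parameters into an Auxiliary Block, and (2) scaling the
# modulation strength at a given depth by a per-depth imbalance score.
import torch
import torch.nn as nn

class AuxiliaryBlock(nn.Module):
    """Clone of one dominant-encoder layer that keeps only under-optimized weights.

    Well-optimized weights are zeroed (a deliberately performance-degraded path),
    so joint training with the weak modality must rely on the remaining,
    under-optimized parameters.
    """
    def __init__(self, layer: nn.Linear, grad_ema: torch.Tensor, keep_ratio: float = 0.5):
        super().__init__()
        self.layer = nn.Linear(layer.in_features, layer.out_features)
        with torch.no_grad():
            # Keep weights whose gradient-magnitude EMA is smallest, i.e. the
            # parameters we assume are still under-optimized (an assumption).
            k = max(1, int(keep_ratio * grad_ema.numel()))
            thresh = grad_ema.flatten().kthvalue(k).values
            mask = (grad_ema <= thresh).float()
            self.layer.weight.copy_(layer.weight * mask)
            self.layer.bias.copy_(layer.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x)

def depth_imbalance(dom_feat: torch.Tensor, weak_feat: torch.Tensor) -> float:
    """Hypothetical per-depth imbalance score based on feature norms.

    Returns ~0.5 when the modalities are balanced at this depth and approaches
    1.0 as the dominant modality's features dominate.
    """
    d, w = dom_feat.norm(), weak_feat.norm()
    return (d / (d + w + 1e-8)).item()

# --- usage sketch -----------------------------------------------------------
torch.manual_seed(0)
dom_layer = nn.Linear(16, 16)                  # one depth of the dominant encoder
grad_ema = torch.rand_like(dom_layer.weight)   # stand-in for tracked grad statistics
aux = AuxiliaryBlock(dom_layer, grad_ema, keep_ratio=0.5)

x_dom, x_weak = torch.randn(4, 16), torch.randn(4, 16)
rho = depth_imbalance(dom_layer(x_dom), x_weak)  # imbalance at this depth
alpha = max(0.0, 2 * rho - 1)                    # adaptive modulation strength

# The more imbalanced this depth, the more the fused path relies on the
# degraded auxiliary block instead of the full dominant layer.
fused_dom = (1 - alpha) * dom_layer(x_dom) + alpha * aux(x_dom)
print(f"imbalance={rho:.3f}, modulation strength={alpha:.3f}")
```

The blending rule is only one plausible reading; the key design point it illustrates is that modulation acts inside the dominant network (on selected parameters, per depth) rather than uniformly suppressing the dominant modality's learning signal.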