Medical image fusion integrates complementary information from multiple imaging modalities to improve clinical interpretation. However, existing deep learning-based methods, including recent spatial-frequency frameworks such as AdaFuse and ASFE-Fusion, often suffer from a fundamental trade-off between global statistical similarity, measured by the correlation coefficient (CC) and mutual information (MI), and local structural fidelity. This paper proposes W-DUALMINE, a reliability-weighted dual-expert fusion framework designed to resolve this trade-off explicitly through architectural constraints and a theoretically grounded loss design. The proposed method introduces dense reliability maps for adaptive modality weighting, a dual-expert fusion strategy that combines a global-context spatial expert with a wavelet-domain frequency expert, and a soft gradient-based arbitration mechanism. Furthermore, we employ a residual-to-average fusion paradigm that guarantees the preservation of global correlation while enhancing local details. Extensive experiments on CT-MRI, PET-MRI, and SPECT-MRI datasets demonstrate that W-DUALMINE consistently outperforms AdaFuse and ASFE-Fusion in CC and MI while preserving local structural fidelity.
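The residual-to-average paradigm can be made concrete with a short sketch: the fused image is predicted as the pixel-wise average of the two source modalities plus a small, bounded learned residual, so the average term anchors the global statistics that CC and MI measure while the residual carries local detail. The following is a minimal illustrative sketch, assuming PyTorch and single-channel inputs; the layer sizes, the tanh bound, and the 0.1 residual scale are our own assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ResidualToAverageFusion(nn.Module):
    """Minimal sketch of a residual-to-average fusion head.

    The fused image is the pixel-wise average of the two input
    modalities plus a bounded learned residual: the average anchors
    global statistics, the residual adds local detail. Architecture
    details here are illustrative assumptions.
    """

    def __init__(self, channels: int = 32):
        super().__init__()
        self.residual_net = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Tanh(),  # bound the residual so the average term dominates
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # img_a, img_b: (N, 1, H, W) registered source images
        average = 0.5 * (img_a + img_b)  # global-statistics anchor
        residual = self.residual_net(torch.cat([img_a, img_b], dim=1))
        return average + 0.1 * residual  # small local-detail correction
```

Intuitively, because the average baseline already tracks the sources' joint intensity statistics and the residual is bounded to a small perturbation, global correlation with the inputs is preserved largely by construction, which is one way to read the abstract's guarantee.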