The scarcity of well-annotated medical datasets necessitates leveraging transfer learning from broader datasets like ImageNet or pre-trained models like CLIP. The model soups approach averages multiple fine-tuned models, aiming to improve performance on in-domain (ID) tasks and to enhance robustness against out-of-distribution (OOD) datasets. However, applying these methods to the medical imaging domain is challenging and yields suboptimal performance, primarily due to differences in error-surface characteristics that stem from data complexities such as heterogeneity, domain shift, class imbalance, and distributional shifts between training and testing. To address this, we propose a hierarchical merging approach that performs local and global aggregation of models at different levels based on their hyperparameter configurations. Furthermore, to reduce the number of models that must be trained during the hyperparameter search, we introduce a computationally efficient method that uses a cyclical learning rate scheduler to produce multiple models for aggregation in the weight space. Our method demonstrates significant improvements over model soups across multiple datasets (around a 6% gain on HAM10000 and CheXpert) while keeping the computational cost of model generation and selection low. Moreover, it achieves better results on OOD datasets than model soups. The code is available at https://github.com/BioMedIA-MBZUAI/FissionFusion.
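To make the hierarchical merging idea concrete, the following is a minimal sketch of two-level weight-space averaging: fine-tuned checkpoints are first averaged within each hyperparameter group (local aggregation) and the resulting group-level models are then averaged globally. This is an illustrative assumption of the general scheme, not the repository's actual API; the function names, the grouping keys, and the uniform averaging are all placeholders.

```python
# Minimal sketch of hierarchical weight-space averaging (assumed scheme, not the paper's exact code).
from collections import OrderedDict
from typing import Dict, List

import torch


def average_state_dicts(state_dicts: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Uniformly average a list of state_dicts that share identical keys and shapes."""
    avg = OrderedDict()
    for key in state_dicts[0]:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg


def hierarchical_merge(groups: Dict[str, List[Dict[str, torch.Tensor]]]) -> Dict[str, torch.Tensor]:
    """Local averaging within each hyperparameter group, then a global average
    of the resulting group-level models."""
    local_models = [average_state_dicts(sds) for sds in groups.values()]
    return average_state_dicts(local_models)


if __name__ == "__main__":
    # Toy usage: two hypothetical hyperparameter groups of tiny linear models.
    def make_sd():
        return torch.nn.Linear(4, 2).state_dict()

    groups = {"lr=1e-4": [make_sd(), make_sd()], "lr=3e-4": [make_sd()]}
    merged = hierarchical_merge(groups)
    model = torch.nn.Linear(4, 2)
    model.load_state_dict(merged)  # merged weights load back into the same architecture
```

In practice, the checkpoints fed into such a merge could come from a single run with a cyclical learning rate scheduler, which yields multiple snapshots per configuration without separate full fine-tuning runs.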