Deep learning has significantly advanced image analysis across diverse domains, but its success often depends on large annotated datasets. Transfer learning addresses this challenge by leveraging pre-trained models to tackle new tasks with limited labeled data. However, discrepancies between source and target domains can hinder effective transfer. We introduce BioTune, a novel adaptive fine-tuning technique based on evolutionary optimization. BioTune enhances transfer learning by selecting which layers to freeze and tuning per-layer learning rates for the unfrozen layers. Through extensive evaluation on nine image classification datasets spanning natural and specialized domains such as medical imaging, BioTune demonstrates superior accuracy and efficiency over state-of-the-art fine-tuning methods, including AutoRGN and LoRA, highlighting its adaptability to varied data characteristics and distribution shifts. Additionally, BioTune consistently achieves top performance across four different CNN architectures, underscoring its flexibility. Ablation studies provide insight into the contribution of BioTune's key components to overall performance. The source code is available at https://github.com/davilac/BioTune.
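To make the search the abstract describes concrete, the following is a minimal, self-contained sketch of an evolutionary loop over a freeze mask plus per-layer learning rates. It is illustrative only, not the BioTune implementation: the layer count, mutation scheme, and the toy `fitness` function (which stands in for fine-tuning the network and measuring validation accuracy) are all assumptions made for this sketch.

```python
import random

random.seed(0)

N_LAYERS = 6  # hypothetical backbone depth, for illustration only


def fitness(genome):
    """Toy stand-in for validation accuracy.

    In the real setting, each genome would configure a fine-tuning run
    (frozen layers + per-layer learning rates) and be scored on held-out
    data. Here we simply reward unfreezing deeper layers at moderate LRs.
    """
    freeze, lrs = genome
    score = 0.0
    for depth, (frozen, lr) in enumerate(zip(freeze, lrs), start=1):
        if not frozen:  # layer is trainable
            score += depth * (1.0 - abs(lr - 1e-3) / 1e-2)
    return score


def random_genome():
    # Genome = (freeze mask, log-uniform per-layer learning rates).
    freeze = [random.random() < 0.5 for _ in range(N_LAYERS)]
    lrs = [10 ** random.uniform(-5, -2) for _ in range(N_LAYERS)]
    return (freeze, lrs)


def mutate(genome, p=0.2):
    freeze, lrs = genome
    # Flip freeze bits and jitter learning rates in log space.
    freeze = [(not f) if random.random() < p else f for f in freeze]
    lrs = [lr * 10 ** random.uniform(-0.5, 0.5) if random.random() < p else lr
           for lr in lrs]
    return (freeze, lrs)


def evolve(pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)


best_freeze, best_lrs = evolve()
```

The returned `best_freeze` and `best_lrs` would then configure the actual fine-tuning run: frozen layers keep their pre-trained weights, and each unfrozen layer's parameter group is assigned its evolved learning rate.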