Fine-tuning pre-trained models for downstream tasks is a widely adopted technique known for its adaptability and reliability across various domains. Despite its conceptual simplicity, fine-tuning entails several troublesome engineering choices, such as selecting hyperparameters and choosing checkpoints along the optimization trajectory. To tackle the difficulty of choosing the best model, one effective solution is model fusion, which combines multiple models in parameter space. However, we observe a large discrepancy between the loss and metric landscapes during the fine-tuning of pre-trained language models. Building on this observation, we introduce a novel model fusion technique that optimizes both the desired metric and the loss through multi-objective Bayesian optimization. In addition, to select hyperparameters effectively, we establish a two-stage procedure by integrating Bayesian optimization processes into our framework. Experiments across various downstream tasks show considerable performance improvements using our Bayesian optimization-guided method.
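As a minimal illustrative sketch (not the paper's implementation), model fusion in parameter space amounts to a convex combination of checkpoint weights. In the method above, the combination coefficients would be proposed by a multi-objective Bayesian optimizer trading off validation loss against the task metric; here they are supplied directly, and the `fuse_checkpoints` helper and its dict-of-floats "state dicts" are hypothetical simplifications:

```python
from typing import Dict, List


def fuse_checkpoints(checkpoints: List[Dict[str, float]],
                     coeffs: List[float]) -> Dict[str, float]:
    """Fuse checkpoints via a convex combination of their parameters.

    Hypothetical sketch: in the described method, `coeffs` would come
    from a multi-objective Bayesian optimization loop over both the
    validation loss and the target metric, not be fixed by hand.
    """
    assert len(checkpoints) == len(coeffs) and checkpoints
    total = sum(coeffs)
    weights = [c / total for c in coeffs]  # normalize to a convex combination
    return {
        name: sum(w * ckpt[name] for w, ckpt in zip(weights, checkpoints))
        for name in checkpoints[0]
    }


# Two toy "checkpoints", each with one scalar parameter
ckpt_a = {"w": 1.0}
ckpt_b = {"w": 3.0}
fused = fuse_checkpoints([ckpt_a, ckpt_b], [1.0, 1.0])
print(fused)  # → {'w': 2.0}
```

With real models, each checkpoint would be a full `state_dict` of tensors and the same averaging would apply parameter-wise; the candidate coefficient vectors would then be scored on held-out data inside the Bayesian optimization loop.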