More music foundation models have recently been released, promising a general, largely task-independent encoding of musical information. Common ways of adapting music foundation models to downstream tasks are probing and fine-tuning. These common transfer learning approaches, however, face challenges: probing may yield suboptimal performance because the pre-trained weights are frozen, while fine-tuning is computationally expensive and prone to overfitting. Our work investigates parameter-efficient transfer learning (PETL) for music foundation models, which combines the advantages of probing and fine-tuning. We introduce three types of PETL methods: adapter-based, prompt-based, and reparameterization-based methods. These methods train only a small number of parameters and therefore do not require significant computational resources. Results show that PETL methods outperform both probing and fine-tuning on music auto-tagging. On key detection and tempo estimation, they achieve results similar to fine-tuning at a significantly lower training cost. However, the usefulness of the current generation of foundation models for key and tempo tasks is called into question by the similar results achieved by training a small model from scratch. Code is available at https://github.com/suncerock/peft-music/
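To make the parameter-efficiency argument concrete, the sketch below illustrates the core idea behind reparameterization-based PETL (LoRA-style): the pre-trained weight matrix stays frozen, and only a low-rank update is trained. This is a minimal, dependency-free illustration with made-up dimensions; it is not the paper's implementation, and all names (`lora_forward`, `matmul`) are illustrative.

```python
# Minimal sketch of reparameterization-based PETL (LoRA-style).
# The frozen weight W is augmented with a trainable low-rank update B @ A;
# only A and B receive gradients, so the trainable parameter count scales
# with the rank r rather than with d_in * d_out.

def matmul(A, B):
    # Naive matrix multiply, sufficient for this small illustration.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B):
    # Effective weight: frozen W (d_out x d_in) plus low-rank delta B @ A,
    # where A is (r x d_in) and B is (d_out x r).
    delta = matmul(B, A)
    W_eff = [[w + d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
    # Apply to x, treated as a column vector.
    return matmul(W_eff, [[v] for v in x])

# Illustrative parameter counts for a single layer with rank r = 1:
d_in, d_out, r = 4, 4, 1
frozen = d_in * d_out              # 16 parameters, kept frozen
trainable = r * d_in + d_out * r   # 8 parameters, actually trained
```

With realistic layer sizes (e.g. d_in = d_out = 768 and r = 8), the trainable fraction drops to roughly 2% of the layer, which is what allows PETL methods to approach fine-tuning performance at a fraction of the training cost.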