In recent years, foundation models have become very popular due to their exceptional performance, mainly in natural language processing (NLP) tasks, where they were first introduced. These models typically comprise hundreds of millions, or even billions, of parameters, making them resource-intensive both during training and in production systems and thereby increasing costs. This paper focuses on reducing a foundation model's size when applied to music information retrieval (MIR) tasks. Our approach combines the Branchformer architecture with SummaryMixing, both first applied in speech recognition, together with a random quantization process. To facilitate reproducibility, we conduct pre-training on publicly available datasets, complemented by a proprietary dataset comparable in scale to other private datasets reported in the literature. We ensure robust evaluation by using a framework consisting of a variety of downstream MIR tasks. Our results show that our architecture achieves competitive performance compared with state-of-the-art models that use multi-head self-attention, while reducing the model size by 8.5% to 12.3%.
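For intuition, the sketch below shows the core SummaryMixing idea: each token is transformed locally, a single mean-pooled summary vector replaces pairwise attention, and the two are combined per token, giving linear rather than quadratic cost in sequence length. This is a minimal illustration of the published SummaryMixing formulation, not the exact configuration used in this work; the hidden size, activations, and module layout are assumptions for the example.

```python
import torch
import torch.nn as nn

class SummaryMixing(nn.Module):
    """Minimal SummaryMixing layer: a linear-time stand-in for
    multi-head self-attention (sizes/activations are illustrative)."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Per-token (local) transformation branch.
        self.local = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU())
        # Branch whose outputs are mean-pooled into one summary vector.
        self.summary = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU())
        # Combines each token's local features with the shared summary.
        self.combine = nn.Linear(2 * d_hidden, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model)
        local = self.local(x)                           # per-token features
        s = self.summary(x).mean(dim=1, keepdim=True)   # (batch, 1, d_hidden)
        s = s.expand(-1, x.size(1), -1)                 # broadcast over time
        return self.combine(torch.cat([local, s], dim=-1))

# Example: a batch of 2 sequences, 100 frames, 512-dim features.
layer = SummaryMixing(d_model=512, d_hidden=512)
out = layer(torch.randn(2, 100, 512))  # -> (2, 100, 512)
```

Because the summary is a single mean over time, the layer scales as O(T) in sequence length instead of the O(T^2) of self-attention, which is the source of the parameter and compute savings the abstract refers to.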