In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of symbolic music. While MIDI is the prevalent, well-established format for music modeling, our findings suggest that LLMs are inherently more compatible with ABC Notation, which aligns more closely with their design and strengths, thereby enhancing the model's performance in musical composition. To address the misalignment of measures across tracks during generation, we propose Synchronized Multi-Track ABC Notation (SMT-ABC Notation), which preserves coherence across multiple musical tracks. Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set. Furthermore, we explore the implications of the Symbolic Music Scaling Law (SMS Law) for model performance. The results indicate a promising direction for future research in music generation, and our open-source contributions offer extensive resources for community-led research.
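To make the measure-alignment problem concrete, the sketch below shows one plausible reading of bar-level synchronization: standard multi-track ABC serializes each voice in full, one after another, so bar i of one track sits far from bar i of the next, whereas interleaving keeps corresponding bars adjacent. This is a minimal illustration only; the function names, the `<t>` track separator, and the rest-padding behavior are assumptions for exposition, not the paper's actual SMT-ABC specification.

```python
# Minimal sketch of bar-level track synchronization in the spirit of
# SMT-ABC. Assumes each track is an ABC voice body whose measures are
# separated by "|"; all names here are illustrative, not the paper's API.

def split_measures(voice: str) -> list[str]:
    """Split one ABC voice body into its measures (bars)."""
    return [bar.strip() for bar in voice.split("|") if bar.strip()]

def synchronize_tracks(voices: list[str], track_sep: str = "<t>") -> str:
    """Interleave measures so that bar i of every track appears together.

    Plain multi-track ABC emits each voice in full before the next, so
    corresponding bars of different tracks can be thousands of tokens
    apart; interleaving places them side by side in the token stream.
    """
    per_track = [split_measures(v) for v in voices]
    n_bars = max(len(t) for t in per_track)
    out = []
    for i in range(n_bars):
        # Emit bar i of every track, padding short tracks with a rest ("z").
        bars = [t[i] if i < len(t) else "z" for t in per_track]
        out.append(track_sep.join(bars))
    return " | ".join(out)

if __name__ == "__main__":
    melody = "G2 A B c | d2 e f g | a4"
    bass   = "C4 | G,4 | C4"
    print(synchronize_tracks([melody, bass]))
    # -> "G2 A B c<t>C4 | d2 e f g<t>G,4 | a4<t>C4"
```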