Music foundation models possess impressive music generation capabilities. When people compose music, they may infuse their understanding of music into their work by using notes and intervals to craft melodies, chords to build progressions, and tempo to create a rhythmic feel. To what extent is this true of music generation models? More specifically, are fundamental Western music theory concepts observable within the "inner workings" of these models? Recent work has proposed leveraging latent audio representations from music generation models for music information retrieval tasks (e.g., genre classification, emotion recognition), which suggests that high-level musical characteristics are encoded within these models. However, probing for individual music theory concepts (e.g., tempo, pitch class, chord quality) remains under-explored. We therefore introduce SynTheory, a synthetic MIDI and audio dataset of music theory concepts, covering tempos, time signatures, notes, intervals, scales, chords, and chord progressions. We then propose a framework to probe for these concepts in music foundation models (Jukebox and MusicGen) and assess how strongly their internal representations encode them. Our findings suggest that music theory concepts are discernible within foundation models and that the degree to which they are detectable varies by model size and layer.