Music understanding is a complex task that often requires reasoning over both structural and semantic elements of audio. We introduce BASS, a benchmark designed to evaluate music understanding and reasoning in audio language models across four broad categories: structural segmentation, lyric transcription, musicological analysis, and artist collaboration. BASS comprises 2658 questions spanning 12 tasks and 1993 unique songs, covering over 138 hours of music from a wide range of genres, and is crafted to assess musicological knowledge and reasoning in real-world scenarios. We evaluate 14 open-source and frontier multimodal LMs and find that even state-of-the-art models struggle on higher-level reasoning tasks such as structural segmentation and artist collaboration, while performing best on lyric transcription. Our analysis reveals that current models leverage linguistic priors effectively but remain limited in reasoning over musical structure, vocal characteristics, and musicological attributes. BASS provides an evaluation framework with broad applications in music recommendation and search, and has the potential to guide the development of audio LMs.