Multimodal Large Language Models (MLLMs) have demonstrated capabilities in audio understanding, but current evaluations may obscure fundamental weaknesses in relational reasoning. We introduce the Music Understanding and Structural Evaluation (MUSE) Benchmark, an open-source resource with 10 tasks designed to probe fundamental music perception skills. We evaluate four SOTA models (Gemini Pro, Gemini Flash, Qwen2.5-Omni, and Audio-Flamingo 3) against a large human baseline (N=200). Our results reveal wide variance in SOTA capabilities and a persistent gap with human experts. While Gemini Pro succeeds on basic perception, Qwen2.5-Omni and Audio-Flamingo 3 perform at or near chance, exposing severe perceptual deficits. Furthermore, we find that Chain-of-Thought (CoT) prompting yields inconsistent, often detrimental results. Our work provides a critical tool for evaluating invariant musical representations and for driving the development of more robust AI systems.