Large Audio Language Models (LALMs) excel at semantic and paralinguistic tasks, yet their ability to perceive the fundamental physical attributes of audio, such as pitch, loudness, and spatial location, remains under-explored. To bridge this gap, we introduce SonicBench, a psychophysically grounded benchmark that systematically evaluates 12 core physical attributes across five perceptual dimensions. Unlike previous datasets, SonicBench uses a controllable generation toolbox to construct stimuli for two complementary paradigms: recognition (absolute judgment) and comparison (relative judgment). This design allows us to probe not only sensory precision but also relational reasoning, a domain where humans typically exhibit greater proficiency. Our evaluation reveals a substantial deficiency in LALMs' foundational auditory understanding: most models perform near random guessing and, contrary to human patterns, fail to show the expected advantage on comparison tasks. Furthermore, explicit reasoning yields minimal gains. However, our linear probing analysis crucially demonstrates that frozen audio encoders do capture these physical cues (accuracy of at least 60%), suggesting that the primary bottleneck lies in the alignment and decoding stages, where models fail to leverage the sensory signals they have already captured.
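The linear probing analysis mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the frozen encoder's embeddings are replaced here by synthetic class-clustered features (a hypothetical stand-in), and a single linear softmax layer is trained on top to predict a discrete physical-attribute label, which is the essence of a linear probe.

```python
import numpy as np

# Hypothetical stand-in for frozen-encoder embeddings: each attribute class
# (e.g. a pitch bin) clusters around its own mean, mimicking an encoder
# that has captured the physical cue in its representation.
rng = np.random.default_rng(0)
n, dim, classes = 600, 32, 3
y = rng.integers(0, classes, size=n)
means = rng.normal(size=(classes, dim))
X = means[y] + 0.5 * rng.normal(size=(n, dim))  # embeddings stay frozen

# Linear probe: softmax regression trained by full-batch gradient descent.
W = np.zeros((dim, classes))
b = np.zeros(classes)
onehot = np.eye(classes)[y]
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / n          # cross-entropy gradient w.r.t. logits
    W -= 1.0 * (X.T @ grad)          # only the probe's weights are updated
    b -= 1.0 * grad.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

Because only the lightweight linear layer is trained while the embeddings stay fixed, above-chance probe accuracy indicates that the representation itself already encodes the attribute, localizing the failure to later alignment and decoding stages.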