Acoustic scene perception involves describing the types of sounds, their timing, their direction and distance, as well as their loudness and reverberation. While audio language models excel at sound recognition, single-channel input fundamentally limits spatial understanding. This work presents Sci-Phi, a spatial audio large language model with dual spatial and spectral encoders that estimates a complete parameter set for all sound sources and the surrounding environment. Trained on over 4,000 hours of synthetic first-order Ambisonics recordings with accompanying metadata, Sci-Phi enumerates and describes up to four directional sound sources in one pass, alongside non-directional background sounds and room characteristics. We evaluate the model with a permutation-invariant protocol and 15 metrics covering content, location, timing, loudness, and reverberation, and analyze its robustness across source counts, signal-to-noise ratios, reverberation levels, and challenging mixtures of acoustically, spatially, or temporally similar sources. Notably, Sci-Phi generalizes to real room impulse responses with only minor performance degradation. Overall, this work establishes the first audio LLM capable of full spatial-scene description, with strong potential for real-world deployment. Demo: https://sci-phi-audio.github.io/demo
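To make the permutation-invariant evaluation concrete, here is a minimal sketch of one matching step. It assumes a standard approach: each source is summarized by a hypothetical feature vector (e.g., direction and onset time), predicted sources are matched one-to-one to references via the Hungarian algorithm, and per-source metrics are then computed on the matched pairs. The feature choice and cost function below are illustrative, not the paper's exact protocol.

```python
# Illustrative permutation-invariant matching, assuming Hungarian assignment
# over per-source feature vectors (the paper's exact cost is not given here).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_sources(pred: np.ndarray, ref: np.ndarray) -> list[tuple[int, int]]:
    """Match predicted sources to reference sources by minimum total cost.

    pred: (n_pred, d) feature matrix, one row per predicted source.
    ref:  (n_ref, d) feature matrix, one row per reference source.
    The cost is plain Euclidean distance, a stand-in for a real metric cost.
    """
    cost = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return list(zip(rows.tolist(), cols.tolist()))

# Example: two predicted vs. two reference sources, features (azimuth_deg, onset_s).
pred = np.array([[90.0, 1.2], [10.0, 0.1]])
ref = np.array([[12.0, 0.0], [88.0, 1.0]])
print(match_sources(pred, ref))  # [(0, 1), (1, 0)]; metrics scored per pair
```

Under this scheme, the 15 metrics are computed on matched prediction-reference pairs, so scores do not depend on the order in which the model enumerates sources.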