In Extended Reality (XR), complex acoustic environments often overwhelm users: entangled sound sources compromise both scene awareness and social engagement. We introduce MoXaRt, a real-time XR system that uses audio-visual cues to separate these sources and enable fine-grained sound interaction. At MoXaRt's core is a cascaded architecture that performs coarse, audio-only separation in parallel with visual detection of sound sources (e.g., faces, instruments). These visual anchors then guide refinement networks that isolate individual sources, separating complex mixes of up to 5 concurrent sources (e.g., 2 voices + 3 instruments) with roughly 2 seconds of processing latency. We validate MoXaRt through a technical evaluation on a new dataset of 30 one-minute recordings featuring concurrent speech and music, and through a 22-participant user study. Results show that our system significantly improves speech intelligibility, yielding a 36.2% increase in listening comprehension in adversarial acoustic environments (p < 0.01) while substantially reducing cognitive load (p < 0.001), paving the way for more perceptive and socially adept XR experiences.
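To make the cascaded design concrete, the following is a minimal sketch of the data flow described above: coarse audio-only separation runs alongside visual detection, and each detected anchor conditions a refinement step. The class names (CoarseSeparator, VisualDetector, GuidedRefiner) and their placeholder bodies are hypothetical stand-ins for illustration, not MoXaRt's actual modules.

```python
# Sketch of the cascaded audio-visual separation pipeline (hypothetical modules).
import numpy as np
from dataclasses import dataclass

@dataclass
class VisualAnchor:
    label: str              # e.g., "face" or "instrument"
    embedding: np.ndarray   # visual feature used to condition refinement

class CoarseSeparator:
    """Audio-only stage: splits the mixture into rough stems (placeholder)."""
    def __call__(self, mixture: np.ndarray, n_stems: int) -> list:
        # Placeholder: an equal split stands in for a learned separator.
        return [mixture / n_stems for _ in range(n_stems)]

class VisualDetector:
    """Video stage: detects sound-producing objects and returns anchors (placeholder)."""
    def __call__(self, frames: np.ndarray) -> list:
        # Placeholder: pretend we detected 2 voices and 3 instruments.
        labels = ["face", "face", "instrument", "instrument", "instrument"]
        return [VisualAnchor(lbl, np.random.randn(128)) for lbl in labels]

class GuidedRefiner:
    """Refinement stage: sharpens one coarse stem using its visual anchor (placeholder)."""
    def __call__(self, stem: np.ndarray, anchor: VisualAnchor) -> np.ndarray:
        # Placeholder: a learned, anchor-conditioned mask would be applied here.
        return stem

def separate(mixture: np.ndarray, frames: np.ndarray) -> dict:
    """Cascade: coarse audio separation in parallel with visual detection,
    then anchor-guided refinement of each stem."""
    detector, coarse, refiner = VisualDetector(), CoarseSeparator(), GuidedRefiner()
    anchors = detector(frames)                      # visual anchors (faces, instruments)
    stems = coarse(mixture, n_stems=len(anchors))   # coarse, audio-only stems
    return {f"{a.label}_{i}": refiner(s, a)         # refine each stem with its anchor
            for i, (s, a) in enumerate(zip(stems, anchors))}

if __name__ == "__main__":
    audio = np.random.randn(48_000 * 2)             # 2 s of mono audio at 48 kHz
    video = np.zeros((60, 224, 224, 3))             # 2 s of frames at 30 fps
    sources = separate(audio, video)
    print({name: wav.shape for name, wav in sources.items()})
```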