Human perception has the unique ability to focus on specific events in a mixture of signals--a challenging task for existing non-intrusive assessment methods. In this work, we introduce semi-intrusive assessment that emulates human attention by framing audio assessment as a text-prediction task with audio-text inputs. To this end, we extend the multi-modal PENGI model through instruction fine-tuning for MOS and SNR estimation. For MOS, our approach achieves absolute Pearson correlation gains of 0.06 and 0.20 over the re-trained MOSRA model and the pre-trained PAM model, respectively. We further propose a novel SNR estimator that can focus on a specific audio source in a mixture, outperforming a random baseline and the fixed-prompt counterpart. Our findings suggest that semi-intrusive assessment can effectively capture human-like selective listening capabilities. Samples are available at https://jozefcoldenhoff.github.io/semi-intrusive-assessment.