Recent advances in speech foundation models (SFMs) have enabled the direct processing of spoken language from raw audio, bypassing intermediate textual representations. This capability exposes SFMs to, and allows them to potentially respond to, the rich paralinguistic variation embedded in the input speech signal. One under-explored dimension of paralinguistic variation is voice quality, encompassing phonation types such as creaky and breathy voice. These phonation types are known to influence how listeners infer affective state, stance, and social meaning in speech. Existing benchmarks for speech understanding largely rely on multiple-choice question answering (MCQA) formats, which are prone to failure and therefore unreliable for capturing the nuanced ways in which paralinguistic features influence model behaviour. In this paper, we probe SFMs through open-ended generation tasks and speech emotion recognition, evaluating whether model behaviour is consistent across different phonation inputs. We introduce a new parallel dataset featuring synthesized modifications to voice quality, designed to evaluate SFM responses to creaky and breathy voice. Our work provides the first examination of SFM sensitivity to these particular non-lexical aspects of speech perception.