Detecting medical conditions from speech acoustics is fundamentally a weakly supervised learning problem: a single, often noisy, session-level label must be linked to nuanced patterns within a long, complex audio recording. The task is further hampered by severe data scarcity and the subjective nature of clinical annotations. While semi-supervised learning (SSL) offers a viable path to leveraging unlabeled data, existing audio methods often fail to address the core challenge that pathological traits are not uniformly expressed across a patient's speech. We propose a novel, audio-only SSL framework that explicitly models this hierarchy by jointly learning from frame-level, segment-level, and session-level representations within unsegmented clinical dialogues. Our end-to-end approach dynamically aggregates these multi-granularity features and generates high-quality pseudo-labels to efficiently exploit unlabeled data. Extensive experiments show the framework is model-agnostic, robust across languages and conditions, and highly data-efficient: it achieves, for instance, 90\% of fully supervised performance using only 11 labeled samples. This work provides a principled approach to learning from weak, far-end supervision in medical speech analysis.