The lack of labeled data is a common challenge in speech classification tasks, particularly those requiring extensive subjective assessment, such as cognitive state classification. In this work, we propose a Semi-Supervised Learning (SSL) framework, introducing a novel multi-view pseudo-labeling method that leverages both acoustic and linguistic characteristics to select the most confident data for training the classification model. Acoustically, unlabeled data are compared to labeled data using the Fréchet audio distance, calculated from embeddings generated by multiple audio encoders. Linguistically, large language models are prompted to revise automatic speech recognition transcriptions and to predict labels based on our proposed task-specific knowledge. Data are identified as high-confidence when the pseudo-labels from both sources agree, while mismatches are treated as low-confidence. A bimodal classifier is then trained to iteratively label the low-confidence data until a predefined criterion is met. We evaluate our SSL framework on emotion recognition and dementia detection tasks. Experimental results demonstrate that, using only 30% of the labeled data, our method achieves performance competitive with fully supervised learning and significantly outperforms two selected baselines.
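As a rough illustration (not the authors' implementation), the Fréchet audio distance between two sets of encoder embeddings can be computed from their Gaussian statistics (mean and covariance); the function name and array shapes below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_a, emb_b):
    """Fréchet distance between Gaussian fits of two embedding sets.

    emb_a, emb_b: (n_samples, dim) arrays of audio-encoder embeddings,
    e.g. unlabeled vs. labeled utterances passed through the same encoder.
    """
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    # Matrix square root of the covariance product; discard tiny
    # imaginary parts introduced by numerical error.
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

In the framework described above, a small distance to the labeled data of a given class would make that class a more confident acoustic pseudo-label; with multiple audio encoders, one distance is computed per encoder and the scores are combined.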
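The agreement-based confidence split can be sketched as follows; this is a minimal illustration under assumed data structures (per-utterance label dictionaries), not the paper's code:

```python
def select_by_agreement(acoustic_preds, linguistic_preds):
    """Multi-view pseudo-label selection by agreement.

    acoustic_preds, linguistic_preds: dicts mapping utterance id -> label
    from the acoustic view (Fréchet-distance matching) and the linguistic
    view (LLM prediction), respectively.

    Returns (high_confidence, low_confidence): agreed pseudo-labels are
    kept for training; disagreements are deferred to the iteratively
    retrained bimodal classifier.
    """
    high_confidence, low_confidence = {}, []
    for utt_id, acoustic_label in acoustic_preds.items():
        if linguistic_preds.get(utt_id) == acoustic_label:
            high_confidence[utt_id] = acoustic_label
        else:
            low_confidence.append(utt_id)
    return high_confidence, low_confidence
```

The low-confidence set would then be relabeled in rounds by the bimodal classifier until the predefined stopping criterion is met.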