Speech encoders pretrained through self-supervised learning (SSL) have demonstrated remarkable performance in various downstream tasks, including Spoken Language Understanding (SLU) and Automatic Speech Recognition (ASR). For instance, fine-tuning SSL models for such tasks has shown significant potential, leading to improvements in state-of-the-art performance across challenging datasets. In contrast to existing research, this paper contributes by comparing the effectiveness of SSL approaches in the context of (i) the low-resource spoken Tunisian Arabic dialect and (ii) its combination with a low-resource SLU and ASR scenario, where only a few semantic annotations are available for fine-tuning. We conduct experiments on the TARIC-SLU dataset using several SSL speech encoders pre-trained on either monolingual or multilingual speech data. Some of them have also been refined, without in-domain or Tunisian data, through a multimodal supervised teacher-student paradigm. This study yields numerous significant findings, which we discuss in this paper.