Self-Supervised Learning (SSL) is widely used to efficiently represent speech for Spoken Language Understanding (SLU), gradually replacing conventional approaches. Meanwhile, textual SSL models have been proposed to encode language-agnostic semantics. The SAMU-XLSR framework employed this semantic information to enrich multilingual speech representations. A recent study investigated in-domain semantic enrichment of SAMU-XLSR by specializing it on downstream transcriptions, leading to state-of-the-art results on a challenging SLU task. The interest of this study lies in the loss of multilingual performance and the lack of task-specific semantic training induced by such specialization on closely related languages, without any SLU involvement. We also consider SAMU-XLSR's loss of its initial cross-lingual abilities caused by a separate SLU fine-tuning step. This paper therefore proposes a dual-task learning approach to improve SAMU-XLSR's semantic enrichment while considering distant languages for multilingual and language portability experiments.