Prompting has become a practical method for utilizing pre-trained language models (LMs). This approach offers several advantages. It allows an LM to adapt to new tasks with minimal training and parameter updates, making it efficient in both storage and computation. Additionally, prompting modifies only the LM's inputs and harnesses the generative capabilities of language models to address various downstream tasks in a unified manner, significantly reducing the human labor needed to design task-specific models. These advantages become even more evident as the number of tasks served by the LM grows. Motivated by these strengths, we are the first to explore the potential of prompting speech LMs in the domain of speech processing. Recently, there has been growing interest in converting speech into discrete units for language modeling. Our pioneering study demonstrates that these quantized speech units are highly versatile within our unified prompting framework: not only can they serve as class labels, but they also carry rich phonetic information that can be re-synthesized into speech signals for speech generation tasks. Specifically, we reformulate speech processing tasks as speech-to-unit generation tasks, allowing us to seamlessly integrate speech classification, sequence generation, and speech generation within a single, unified prompting framework. Experimental results show that prompting achieves performance competitive with strong fine-tuning baselines built on self-supervised learning models, with a comparable number of trainable parameters. Prompting also shows promising results in the few-shot setting. Moreover, as more advanced speech LMs emerge, the proposed prompting framework holds even greater potential.
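The pipeline sketched in the abstract (speech → discrete units → prompted unit LM → output units) can be illustrated with a minimal toy example. All names below (`quantize`, `ToyUnitLM`, the prompt length, vocabulary size, and scoring rule) are hypothetical stand-ins for illustration, not the authors' actual implementation; a real system would use a learned quantizer (e.g. k-means over self-supervised features) and an autoregressive speech LM whose weights stay frozen while only the prompt vectors are trained.

```python
# Toy sketch of speech-to-unit prompting. Hypothetical stand-ins throughout;
# not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 50        # size of the discrete speech-unit vocabulary (assumed)
PROMPT_LEN = 4    # number of trainable prompt vectors prepended to the input

def quantize(waveform, vocab=VOCAB):
    """Stand-in for a speech quantizer: maps a waveform to discrete unit IDs.
    Pretends a 10 ms hop at 16 kHz (160 samples per frame)."""
    frames = len(waveform) // 160
    return rng.integers(0, vocab, size=frames)

class ToyUnitLM:
    """Stand-in unit LM: all weights frozen; in prompt tuning, only
    self.prompt would receive gradient updates."""
    def __init__(self, vocab=VOCAB, dim=16):
        self.emb = rng.normal(size=(vocab, dim))          # frozen embeddings
        self.prompt = rng.normal(size=(PROMPT_LEN, dim))  # trainable prompt

    def generate(self, units, n_out=5):
        # Prepend the prompt vectors to the unit embeddings, then produce
        # output units; a real LM would decode autoregressively.
        x = np.concatenate([self.prompt, self.emb[units]], axis=0)
        logits = x.mean(axis=0) @ self.emb.T              # toy scoring
        return np.argsort(-logits)[:n_out]                # top-n output units

wave = rng.normal(size=16000)       # 1 s of fake 16 kHz audio
units = quantize(wave)              # speech -> discrete units
out = ToyUnitLM().generate(units)   # prompted unit LM -> output units
print(len(units), out.shape)
```

Because every task is cast as unit generation, the same frozen LM serves classification (output units read as class labels), sequence generation, and speech generation (output units vocoded back to audio), with only a small per-task prompt stored.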