Large language models have revolutionized natural language processing by leveraging self-supervised pretraining on vast textual data. Inspired by this success, researchers have investigated complicated speech tokenization methods to discretize continuous speech signals so that language modeling techniques can be applied to speech data. However, existing approaches either model semantic (content) tokens, potentially losing acoustic information, or model acoustic tokens, risking the loss of semantic (content) information. Having multiple token types also complicates the architecture and requires additional pretraining. Here we show that discretizing mel-filterbank channels into discrete intensity bins produces a simple representation, dMel, that performs better than existing speech tokenization methods. Using an LM-style transformer architecture for speech-text modeling, we comprehensively evaluate different speech tokenization methods on speech recognition (ASR) and speech synthesis (TTS). Our results demonstrate the effectiveness of dMel in achieving high performance on both tasks within a unified framework, paving the way for efficient and effective joint modeling of speech and text.
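The core idea — mapping each mel-filterbank value to one of a fixed number of intensity bins — can be sketched as follows. This is a minimal illustration assuming uniform quantization over a fixed dynamic range; the function name, bin count, and range handling are illustrative choices, not details taken from the abstract.

```python
import numpy as np

def dmel_tokenize(log_mel, num_bins=16, low=None, high=None):
    """Quantize each mel-filterbank value into one of num_bins intensity bins.

    log_mel: array of shape (channels, frames) with log-mel features.
    Returns integer bin indices of the same shape, in [0, num_bins - 1].
    (Uniform binning is an assumption for illustration.)
    """
    low = log_mel.min() if low is None else low
    high = log_mel.max() if high is None else high
    # scale values into [0, 1], then assign each to a bin index
    scaled = (log_mel - low) / max(high - low, 1e-8)
    return np.clip((scaled * num_bins).astype(int), 0, num_bins - 1)

# toy example: 80 mel channels x 100 frames of synthetic log-mel features
rng = np.random.default_rng(0)
log_mel = rng.normal(size=(80, 100))
tokens = dmel_tokenize(log_mel, num_bins=16)
```

Because each channel is binned independently, a frame of 80 channels becomes 80 small integers — a representation a standard transformer can consume directly, with no learned codec or separate tokenizer pretraining.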