Large language models have revolutionized natural language processing through self-supervised pretraining on massive datasets. Inspired by this success, researchers have explored adapting these methods to speech by discretizing continuous audio into tokens using neural audio codecs. However, existing approaches face limitations, including high bitrates, the loss of either semantic or acoustic information, and the reliance on multi-codebook designs when trying to capture both, which increases architectural complexity for downstream tasks. To address these challenges, we introduce FocalCodec, an efficient low-bitrate codec based on focal modulation that utilizes a single binary codebook to compress speech at bitrates between 0.16 and 0.65 kbps. FocalCodec delivers competitive performance in speech resynthesis and voice conversion at lower bitrates than the current state-of-the-art, while effectively handling multilingual speech and noisy environments. Evaluation on downstream tasks shows that FocalCodec successfully preserves sufficient semantic and acoustic information, while also being well-suited for generative modeling. Demo samples and code are available at https://lucadellalib.github.io/focalcodec-web/.