Large language models (LLMs) have demonstrated the capability to handle a variety of generative tasks. This paper presents the UniAudio system, which, unlike prior task-specific approaches, leverages LLM techniques to generate multiple types of audio (including speech, sounds, music, and singing) conditioned on given inputs. UniAudio 1) first tokenizes all types of target audio along with the other condition modalities, 2) concatenates each source-target pair into a single sequence, and 3) performs next-token prediction with an LLM. A multi-scale Transformer model is also proposed to handle the overly long sequences caused by the residual-vector-quantization-based neural codec used in tokenization. Training of UniAudio is scaled up to 165K hours of audio and 1B parameters across all generative tasks, aiming to acquire sufficient prior knowledge of both the intrinsic properties of audio and the inter-relationships between audio and other modalities. The trained UniAudio model therefore has the potential to become a foundation model for universal audio generation: it shows strong capability on all trained tasks and can seamlessly support new audio-generation tasks after simple fine-tuning. Experiments demonstrate that UniAudio achieves state-of-the-art or at least competitive results on most of the 11 tasks. Demo and code are released at https://github.com/yangdongchao/UniAudio
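The pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the special-token IDs, helper names (`build_sequence`, `rvq_flatten`), and codebook layout are hypothetical and only show how a source-target pair becomes one flat token sequence, and why residual vector quantization inflates sequence length.

```python
# Hypothetical sketch of UniAudio-style sequence construction; the real
# vocabulary, special tokens, and codec interface differ.
def build_sequence(task_id, condition_tokens, target_tokens,
                   bos=0, sep=1, eos=2):
    """Concatenate one source-target pair into a single flat sequence
    for next-token prediction: <bos> task cond <sep> target <eos>."""
    return [bos, task_id] + list(condition_tokens) + [sep] \
        + list(target_tokens) + [eos]

def rvq_flatten(frames):
    """Flatten residual-vector-quantized codec output: each audio frame
    carries nq codebook indices, so flattening multiplies the sequence
    length by nq -- the overlong-sequence problem the multi-scale
    Transformer is designed to address."""
    return [code for frame in frames for code in frame]

# Example: task 7, two condition tokens, three target tokens.
seq = build_sequence(7, [10, 11], [20, 21, 22])
# Example: 2 frames x 2 codebooks -> 4 tokens after flattening.
flat = rvq_flatten([[5, 6], [7, 8]])
```

In this toy layout, `seq` is `[0, 7, 10, 11, 1, 20, 21, 22, 2]`; a 2-frame, 2-codebook clip already yields 4 tokens, so a 3-second clip at 50 frames/s with 3 codebooks would produce 450 tokens for the audio alone.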