Recent advances in the audio language modeling (ALM) domain tackle audio understanding and text-to-audio generation as separate tasks. Very few studies attempt to unify these tasks -- an essential step toward advanced multimodal reasoning. This paper introduces the Unified Audio Language Model (UALM), which aims to unify audio understanding, text-to-audio generation, and multimodal reasoning in a single model. To achieve this goal, we first present UALM-Gen, a text-to-audio language model that directly predicts audio tokens and is comparable to state-of-the-art diffusion-based models. We then demonstrate that, with proper data blending, training recipes, and inference techniques, our single UALM model matches the quality of state-of-the-art specialized models in audio understanding, text-to-audio generation, and text reasoning. Furthermore, we present UALM-Reason, a multimodal reasoning model that utilizes both text and audio in its intermediate thinking steps to facilitate complex generation tasks. To our knowledge, this is the first demonstration in audio research of cross-modal generative reasoning, with its effectiveness confirmed by subjective evaluations.