Augmenting large language models (LLMs) to understand audio -- including non-speech sounds and non-verbal speech -- is critically important for diverse real-world applications of LLMs. In this paper, we propose Audio Flamingo, a novel audio language model with 1) strong audio understanding abilities, 2) the ability to quickly adapt to unseen tasks via in-context learning and retrieval, and 3) strong multi-turn dialogue abilities. We introduce a series of training techniques, architecture designs, and data strategies to enhance our model with these abilities. Extensive evaluations across various audio understanding tasks confirm the efficacy of our method, setting the new state of the art on numerous benchmarks. Our demo website is https://audioflamingo.github.io/ and the code is open-sourced at https://github.com/NVIDIA/audio-flamingo.