Research leveraging large language models (LLMs) is currently surging. Many works harness the powerful reasoning capabilities of these models to comprehend various modalities, such as text, speech, images, and videos. Others utilize LLMs to interpret human intent and generate desired outputs such as images, videos, and music. However, research that combines both understanding and generation using LLMs remains limited and in its nascent stage. To address this gap, we introduce the Multi-modal Music Understanding and Generation (M$^{2}$UGen) framework, which integrates an LLM's abilities to comprehend and generate music across different modalities. The M$^{2}$UGen framework is purpose-built to unlock creative potential from diverse sources of inspiration, encompassing music, images, and video, through the use of pretrained MERT, ViT, and ViViT models, respectively. To enable music generation, we explore the use of AudioLDM 2 and MusicGen. Bridging multi-modal understanding and music generation is accomplished through the integration of the LLaMA 2 model. Furthermore, we make use of the MU-LLaMA model to generate extensive datasets supporting text/image/video-to-music generation, facilitating the training of our M$^{2}$UGen framework. We conduct a thorough evaluation of the proposed framework. The experimental results demonstrate that our model matches or surpasses the performance of current state-of-the-art models.
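The overall pipeline described above (modality-specific encoders feeding a LLaMA 2 bridge, which conditions a music decoder) can be sketched as follows. This is a minimal illustrative stub, not the authors' implementation: every class and function name here is hypothetical, and the real system would invoke the pretrained MERT/ViT/ViViT encoders, the LLaMA 2 model, and the MusicGen or AudioLDM 2 decoder in place of these placeholders.

```python
# Hypothetical sketch of the M^2UGen flow; all names are illustrative
# stubs, not the actual model APIs.
from dataclasses import dataclass
from typing import List


@dataclass
class ModalInput:
    modality: str          # "music" | "image" | "video"
    features: List[float]  # placeholder for encoder output embeddings


def encode(modality: str, raw: bytes) -> ModalInput:
    """Route raw input to the matching pretrained encoder (stubbed).

    A real system would run MERT for music, ViT for images, and ViViT
    for video; here we return dummy feature vectors.
    """
    encoders = {"music": "MERT", "image": "ViT", "video": "ViViT"}
    if modality not in encoders:
        raise ValueError(f"unsupported modality: {modality}")
    return ModalInput(modality, [0.0] * 4)


def bridge_llm(prompt: str, inputs: List[ModalInput]) -> str:
    """Stand-in for the LLaMA 2 bridge: fuse modal features with the
    user's text prompt into a conditioning signal for music generation."""
    tags = ",".join(i.modality for i in inputs)
    return f"music conditioned on [{tags}] for: {prompt}"


def generate_music(conditioning: str, decoder: str = "MusicGen") -> str:
    """Decoder stub standing in for MusicGen or AudioLDM 2."""
    return f"{decoder} output <- {conditioning}"


if __name__ == "__main__":
    cond = bridge_llm("upbeat jazz", [encode("video", b"")])
    print(generate_music(cond))
```

The key design point the sketch mirrors is that understanding (encoders plus LLM) and generation (music decoder) are separate stages joined only through the LLM's output, so either decoder can be swapped in without retraining the encoders.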