Music is a universal language that can communicate emotions and feelings. It forms an essential part of the whole spectrum of creative media, ranging from movies to social media posts. Machine learning models that synthesize music are predominantly conditioned on textual descriptions of the desired music. Inspired by how musicians compose music not only from a movie script but also through visualization, we propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music. MeLFusion is a text-to-music diffusion model with a novel "visual synapse", which effectively infuses semantics from the visual modality into the generated music. To facilitate research in this area, we introduce a new dataset, MeLBench, and propose a new evaluation metric, IMSM. Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of the generated music, as measured both objectively and subjectively, with a relative gain of up to 67.98% on the FAD score. We hope that our work will draw attention to this pragmatic yet relatively under-explored research area.