At the growing intersection of generative AI and human artistic processes, this study probes the critical yet underexplored terrain of alignment in human-centric automatic song composition. We propose a novel task, Colloquial Description-to-Song Generation, which focuses on aligning generated content with colloquial human expressions. The task aims to bridge the gap between colloquial language understanding and auditory expression within an AI model, with the ultimate goal of creating songs that accurately satisfy human auditory expectations and structurally conform to musical norms. Existing datasets are limited by their narrow descriptive scope, semantic gaps, and inaccuracies. To overcome data scarcity in this domain, we present the Caichong Music Dataset (CaiMD). CaiMD is manually annotated by both professional musicians and amateurs, offering diverse perspectives and a comprehensive understanding of colloquial descriptions. Unlike existing datasets preset with expert annotations, or auto-generated ones carrying inherent biases, CaiMD better serves our purpose of aligning AI-generated music with the results that a broad range of users desire. Moreover, we propose an innovative single-stage framework, MuDiT/MuSiT, to enable effective human-machine alignment in song creation. The framework not only achieves cross-modal comprehension between colloquial language and auditory music perception but also ensures that generated songs match user-desired results. MuDiT/MuSiT employs a single DiT/SiT model for end-to-end generation of musical components such as melody, harmony, rhythm, vocals, and instrumentation. This approach ensures harmonious sonic cohesion among all generated musical components, facilitating better resonance with human auditory expectations.