This work presents a novel method for composing and improvising music inspired by Cornelius Cardew's Treatise, using AI to bridge graphic notation and musical expression. By leveraging OpenAI's ChatGPT to interpret the abstract visual elements of Treatise, we convert these graphical images into descriptive textual prompts. These prompts are then input into MusicLDM, a pre-trained latent diffusion model designed for music generation. We introduce a technique called "outpainting," which overlaps sections of AI-generated music to create a seamless and cohesive composition. We demonstrate a new perspective on performing and interpreting graphic scores, showing how AI can transform visual stimuli into sound and expand the creative possibilities of contemporary and experimental music composition. Musical pieces are available at https://bit.ly/TreatiseAI