Music is a powerful medium for influencing listeners' emotional states, and this capacity has driven a surge of research interest in AI-based affective music generation in recent years. Many existing systems, however, are black boxes that are not directly controllable, making them less flexible and adaptive to users. We present \textit{AffectMachine-Pop}, an expert system capable of generating retro-pop music according to arousal and valence values, which can either be pre-determined or derived from a listener's real-time emotional state. To validate the efficacy of the system, we conducted a listening study demonstrating that AffectMachine-Pop is capable of generating affective music at target levels of arousal and valence. The system is tailored for use either as a tool for generating interactive affective music based on user input, or for incorporation into biofeedback or neurofeedback systems to assist users with emotion self-regulation.