The recent success of Reinforcement Learning algorithms in complex environments has inspired many theoretical approaches in cognitive science. Artistic environments are studied within the cognitive science community as rich, natural, multi-sensory, multi-cultural environments. In this work, we propose introducing Reinforcement Learning to improve the control of artistic robot applications. Deep Q-learning Neural Networks (DQN) are among the most successful algorithms for implementing Reinforcement Learning in robotics: DQN methods generate complex control policies that allow robots to execute demanding tasks in a wide set of environments. Current art painting robot applications, by contrast, use simple control laws that limit their adaptability to a narrow set of simple environments. In this work, we propose introducing DQN within an art painting robot application, with the goal of studying how a complex control policy impacts the performance of a basic art painting task. The main expected contribution of this work is to serve as a first baseline for future work introducing DQN methods in complex art painting robot frameworks. The experiments consist of real-world executions of human-drawn sketches using the DQN-generated policy on the humanoid robot TEO. Results are compared with the reference inputs in terms of similarity and obtained reward.
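The abstract names DQN as the policy learner. As a minimal, self-contained sketch of the machinery a DQN involves (replay buffer, target network, epsilon-greedy exploration, TD updates), the toy below trains a linear Q-network on a hypothetical one-dimensional "pen position" task, where the agent must move a pen to the end of a stroke. The task, network size, and hyperparameters are illustrative assumptions, not the setup used with TEO.

```python
import random
import numpy as np

# Toy 1D "canvas": the pen starts at cell 0 and must reach cell N-1
# (a stand-in for reaching the end of a stroke). Illustrative only.
N = 6            # number of pen positions
ACTIONS = 2      # 0 = move left, 1 = move right

def step(pos, action):
    """Environment transition: clamp the pen to the canvas, reward the goal."""
    nxt = max(0, min(N - 1, pos + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N - 1 else -0.01   # small cost per step
    return nxt, reward, nxt == N - 1

def onehot(pos):
    s = np.zeros(N)
    s[pos] = 1.0
    return s

random.seed(0)
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (ACTIONS, N))   # online Q-network (linear: Q = W @ s)
W_target = W.copy()                       # periodically synced target network
buffer = []                               # experience replay buffer
gamma, lr, eps = 0.95, 0.1, 0.2

for episode in range(300):
    pos, done, t = 0, False, 0
    while not done and t < 20:
        # Epsilon-greedy action selection from the online network.
        if rng.random() < eps:
            a = int(rng.integers(ACTIONS))
        else:
            a = int(np.argmax(W @ onehot(pos)))
        nxt, r, done = step(pos, a)
        buffer.append((pos, a, r, nxt, done))
        pos, t = nxt, t + 1
        # Sample a minibatch from replay and apply TD(0) updates,
        # bootstrapping from the frozen target network.
        for s0, a0, r0, s1, d in random.sample(buffer, min(8, len(buffer))):
            target = r0 + (0.0 if d else gamma * np.max(W_target @ onehot(s1)))
            td_err = target - (W @ onehot(s0))[a0]
            W[a0] += lr * td_err * onehot(s0)
    if episode % 10 == 0:
        W_target = W.copy()               # sync target network

# The greedy policy should now move the pen right toward the stroke end.
policy = [int(np.argmax(W @ onehot(p))) for p in range(N - 1)]
print(policy)
```

In a real painting setup the one-hot position would be replaced by a richer state (e.g. canvas image and end-effector pose) and the linear map by a deep network, but the replay/target-network training loop keeps the same shape.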