Diffusion models have recently demonstrated impressive ability in visual generation tasks. Beyond static images, increasing research attention has been drawn to the generation of realistic videos. Video generation not only demands higher visual quality, but also poses the challenge of ensuring temporal continuity. Among video generation tasks, human-centric content, such as human dancing, is even more difficult to generate due to the high degrees of freedom of human motion. In this paper, we propose a novel framework, named DANCER (Dance ANimation via Condition Enhancement and Rendering with Diffusion Model), for realistic single-person dance synthesis built upon the recent Stable Video Diffusion model. Since the generation is guided by a reference image and a pose video sequence, we introduce two key modules into our framework to fully exploit these two inputs. Specifically, we design an Appearance Enhancement Module (AEM) that attends to the details of the reference image during generation, and we extend the motion guidance through a Pose Rendering Module (PRM) that captures pose conditions from extra domains. To further improve the generation capability of our model, we also collect a large amount of video data from the Internet and construct a new dataset, TikTok-3K, to enhance model training. The effectiveness of the proposed model is validated through extensive experiments on real-world datasets, where it outperforms state-of-the-art methods. All data and code will be released upon acceptance.
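The abstract describes the two conditioning paths at a high level. As a rough illustration only, the PyTorch sketch below shows one plausible way a reference image (encoded into cross-attention tokens by an appearance module) and a rendered pose sequence (added as per-frame spatial guidance) could condition a video diffusion denoiser. All module names, shapes, and wiring here are assumptions made for exposition; this is not the DANCER implementation.

```python
# Hypothetical sketch of dual conditioning for a video diffusion denoiser:
# a reference image supplies appearance tokens (cross-attention), and
# rendered pose maps supply per-frame spatial guidance. Shapes are toy-sized.
import torch
import torch.nn as nn

class AppearanceEnhancementModule(nn.Module):
    """Encodes the reference image into tokens used as cross-attention context."""
    def __init__(self, dim=320):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=8, stride=8),  # patchify reference image
            nn.GELU(),
        )

    def forward(self, ref_img):                     # (B, 3, H, W)
        feat = self.encoder(ref_img)                # (B, dim, H/8, W/8)
        return feat.flatten(2).transpose(1, 2)      # (B, N_tokens, dim)

class PoseRenderingModule(nn.Module):
    """Encodes per-frame pose renderings (e.g. skeleton maps) into guidance features."""
    def __init__(self, dim=320):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=8, stride=8)

    def forward(self, pose_frames):                 # (B, T, 3, H, W)
        b, t, c, h, w = pose_frames.shape
        feat = self.encoder(pose_frames.reshape(b * t, c, h, w))
        return feat.reshape(b, t, *feat.shape[1:])  # (B, T, dim, H/8, W/8)

class Denoiser(nn.Module):
    """Stand-in for the video diffusion U-Net: pose guidance is concatenated with
    the noisy latents, and appearance tokens enter through cross-attention."""
    def __init__(self, dim=320, latent_ch=4):
        super().__init__()
        self.in_proj = nn.Conv2d(latent_ch + dim, dim, 1)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.out_proj = nn.Conv2d(dim, latent_ch, 1)

    def forward(self, noisy_latents, pose_feat, app_tokens):
        # noisy_latents: (B, T, 4, h, w); pose_feat: (B, T, dim, h, w)
        b, t = noisy_latents.shape[:2]
        x = torch.cat([noisy_latents, pose_feat], dim=2)
        x = self.in_proj(x.flatten(0, 1))                 # (B*T, dim, h, w)
        tokens = x.flatten(2).transpose(1, 2)             # (B*T, h*w, dim)
        ctx = app_tokens.repeat_interleave(t, dim=0)      # share ref tokens per frame
        tokens = tokens + self.attn(tokens, ctx, ctx)[0]  # cross-attend to appearance
        x = tokens.transpose(1, 2).reshape_as(x)
        eps = self.out_proj(x)                            # predicted noise per frame
        return eps.reshape(b, t, *eps.shape[1:])

# Smoke test with toy shapes.
aem, prm, unet = AppearanceEnhancementModule(), PoseRenderingModule(), Denoiser()
ref = torch.randn(1, 3, 64, 64)                   # reference image
poses = torch.randn(1, 8, 3, 64, 64)              # 8 frames of rendered pose maps
latents = torch.randn(1, 8, 4, 8, 8)              # noisy video latents
print(unet(latents, prm(poses), aem(ref)).shape)  # torch.Size([1, 8, 4, 8, 8])
```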