Diffusion models have marked a significant milestone in image and video generation. However, generating videos that precisely preserve the shape and location of moving objects, such as robots, remains a challenge. This paper presents diffusion models specifically tailored to generate videos that accurately maintain the shape and location of mobile robots. This capability offers substantial benefits to those working on detecting dangerous human--robot interactions, as it enables the synthesis of training data for collision detection models and circumvents the need to collect real-world data, which often raises legal and ethical issues. Our models embed readily accessible robot pose information and apply semantic mask regulation within the ConvNeXt backbone network. These techniques refine intermediate outputs, thereby improving the retention of shape and location. Extensive experiments demonstrate that, compared to the baseline diffusion model, our models achieve notable improvements in maintaining the shape and location of different robots, as well as in overall video generation quality. Code will be open-sourced at \href{https://github.com/PengPaulWang/diffusion-robots}{GitHub}.