We propose a simple yet effective pipeline for stylizing a 3D scene, harnessing the power of 2D image diffusion models. Given a NeRF model reconstructed from a set of multi-view images, we perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model. Given a target style prompt, we first generate perceptually similar multi-view images by leveraging a depth-conditioned diffusion model with an attention-sharing mechanism. Next, based on the stylized multi-view images, we propose to guide the style transfer process with a sliced Wasserstein loss computed on feature maps extracted from a pre-trained CNN model. Our pipeline consists of decoupled steps, allowing users to test various prompt ideas and preview the stylized 3D result before proceeding to the NeRF fine-tuning stage. We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
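To make the sliced Wasserstein loss mentioned above concrete, the following is a minimal PyTorch sketch of how such a loss can be computed between two CNN feature maps, treating each spatial location as a sample in feature space. This is an illustrative reconstruction, not the paper's released code; the function name, the `num_projections` default, and the squared-distance formulation are our assumptions.

```python
# Minimal sketch of a sliced Wasserstein loss between two feature maps,
# assuming PyTorch; names and defaults are illustrative, not from the paper.
import torch

def sliced_wasserstein_loss(feat_a, feat_b, num_projections=64):
    """Sliced Wasserstein distance between feature maps of shape (C, H, W).

    Each spatial location is treated as a C-dimensional sample, so both
    maps must yield the same number of samples (H*W); otherwise one could
    randomly subsample the larger set.
    """
    c = feat_a.shape[0]
    # Flatten spatial dimensions: (C, H, W) -> (H*W, C) point clouds.
    a = feat_a.reshape(c, -1).t()
    b = feat_b.reshape(c, -1).t()
    # Draw random unit directions in feature space.
    proj = torch.randn(c, num_projections, device=feat_a.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    # Project both point clouds onto each direction and sort the projections.
    a_proj, _ = torch.sort(a @ proj, dim=0)
    b_proj, _ = torch.sort(b @ proj, dim=0)
    # 1D Wasserstein-2 distance per direction, averaged over all directions.
    return ((a_proj - b_proj) ** 2).mean()
```

In the setting described above, `feat_a` and `feat_b` would be feature maps from a pre-trained CNN (e.g., a VGG-style network) extracted from a NeRF rendering and the corresponding stylized target view, with the loss backpropagated to the NeRF parameters during fine-tuning.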