Recent advances in text-to-3D generation produce neural radiance fields (NeRFs) via score distillation sampling, enabling 3D asset creation without real-world data capture. As NeRF generation quality rapidly improves, protecting the copyright of generated NeRFs has become increasingly important. Prior works watermark NeRFs post-generation, but this approach suffers from two vulnerabilities. First, a delay lies between NeRF generation and watermarking, because the secret message is embedded into the NeRF model through fine-tuning after generation. Second, producing a non-watermarked NeRF as an intermediate artifact creates an opportunity for theft. To address both issues, we propose Dreamark, which embeds a secret message by backdooring the NeRF during generation. Specifically, we first pre-train a watermark decoder; Dreamark then generates backdoored NeRFs such that the target secret message can be verified by the pre-trained watermark decoder from an arbitrary trigger viewport. We evaluate generation quality and watermark robustness against image-level and model-level attacks. Extensive experiments show that the watermarking process does not degrade generation quality, and the watermark attains over 90% accuracy under both image-level attacks (e.g., Gaussian noise) and model-level attacks (e.g., pruning).
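The 90+% accuracy figure reported above is a bit-level match rate between the message decoded from a rendered trigger view and the embedded target message. A minimal sketch of that metric, with purely illustrative names and data (the real decoder in Dreamark is a pre-trained network; here the decoded bits are given directly):

```python
import numpy as np

def bit_accuracy(decoded, target):
    """Fraction of positions where the decoded message matches the target."""
    decoded = np.asarray(decoded, dtype=int)
    target = np.asarray(target, dtype=int)
    assert decoded.shape == target.shape, "messages must have equal length"
    return float((decoded == target).mean())

# Illustrative 8-bit secret message and a decoding with one bit flipped,
# e.g. after an image-level attack such as Gaussian noise on the render.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
decoded = np.array([1, 0, 1, 1, 0, 1, 1, 0])

print(bit_accuracy(decoded, target))  # 0.875
```

In practice the message is much longer (e.g. dozens of bits), and accuracy is averaged over many trigger viewports and attack settings; a watermark is considered verified when this rate stays well above the 50% chance level.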