3D Gaussian splatting (3DGS), known for its groundbreaking performance and efficiency, has become a dominant 3D representation and brought progress to many 3D vision tasks. However, in this work, we reveal a significant security vulnerability that has been largely overlooked in 3DGS: the computation cost of training 3DGS can be maliciously manipulated by poisoning the input data. By developing an attack named Poison-splat, we expose a novel attack surface where the adversary can poison the input images to drastically increase the memory consumption and time needed for 3DGS training, pushing the algorithm towards its worst-case computational complexity. In extreme cases, the attack can even consume all allocable memory, leading to a Denial-of-Service (DoS) attack that disrupts servers and causes practical damage to real-world 3DGS service vendors. Such a computation-cost attack is achieved by addressing a bi-level optimization problem through three tailored strategies: attack objective approximation, proxy model rendering, and optional constrained optimization. These strategies not only ensure the effectiveness of our attack but also make it difficult to counter with simple defensive measures. We hope the revelation of this novel attack surface can draw attention to this crucial yet overlooked vulnerability of 3DGS systems. Our code is available at https://github.com/jiahaolu97/poison-splat .
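The bi-level structure of the attack can be sketched in a schematic form; the notation below is our illustrative assumption (the cost proxy $C(\cdot)$ and perturbation budget $\epsilon$ stand in for the paper's concrete choices, e.g., a proxy tied to the number of Gaussians):

```latex
% delta: adversarial perturbation on the input images D
% theta*(delta): 3DGS model parameters obtained by training on poisoned data
% C(.): training-cost proxy (memory / time surrogate) -- assumed notation
% L: the 3DGS reconstruction loss; eps: optional perturbation budget
\begin{aligned}
\max_{\delta}\; & C\bigl(\theta^{*}(\delta)\bigr) \\
\text{s.t.}\;   & \theta^{*}(\delta) \;=\; \arg\min_{\theta}\;
                  \mathcal{L}\bigl(G_{\theta};\, D + \delta\bigr), \\
                & \|\delta\|_{\infty} \le \epsilon
                  \quad \text{(optional constrained attack)}.
\end{aligned}
```

The inner problem is the victim's ordinary 3DGS training on the poisoned images, while the outer problem searches for perturbations that drive that training toward maximal resource consumption; the three strategies named above (objective approximation, proxy model rendering, constrained optimization) make this outer maximization tractable.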