As frontier AI models become more powerful and more costly to develop, adversaries have growing incentives to steal model weights by mounting exfiltration attacks. In this work, we consider exfiltration attacks in which an adversary attempts to sneak model weights out of a datacenter over a network. Although exfiltration attacks are multi-step cyberattacks, we demonstrate that a single factor, the compressibility of model weights, significantly heightens exfiltration risk for large language models (LLMs). We tailor compression specifically for exfiltration by relaxing decompression constraints and show that attackers could achieve 16x to 100x compression with minimal trade-offs, reducing from months to days the time an attacker would need to illicitly transmit model weights from the defender's server. Finally, we study defenses that reduce exfiltration risk in three distinct ways: making models harder to compress, making them harder to 'find,' and tracking provenance for post-attack analysis via forensic watermarks. While all three defenses are promising, the forensic watermark defense is both effective and cheap, making it a particularly attractive lever for mitigating weight-exfiltration risk.
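To make the months-to-days claim concrete, the back-of-envelope sketch below estimates transmission time as a function of compression ratio. The checkpoint size and covert egress rate are illustrative assumptions chosen for the example, not measurements from this work:

```python
# Back-of-envelope exfiltration-time estimate.
# Assumptions (illustrative, not from the paper):
#   - ~1 TB of fp16 weights (roughly a 500B-parameter model)
#   - ~1 Mbit/s of covert egress that evades network monitoring
MODEL_BYTES = 1e12   # assumed checkpoint size in bytes
EGRESS_BPS = 1e6     # assumed covert egress rate in bits per second

def exfil_days(compression_ratio: float) -> float:
    """Days needed to transmit the compressed weights over the covert channel."""
    bits_to_send = MODEL_BYTES * 8 / compression_ratio
    return bits_to_send / EGRESS_BPS / 86_400  # 86,400 seconds per day

for ratio in (1, 16, 100):
    print(f"{ratio:>3}x compression -> {exfil_days(ratio):6.1f} days")
# Under these assumptions: 1x ~ 92.6 days (months),
# 16x ~ 5.8 days, 100x ~ 0.9 days (days).
```

Under these assumed parameters, uncompressed weights take roughly three months to exfiltrate, while the 16x to 100x ratios reported above bring this down to a few days or less, which is the regime the rest of the paper analyzes.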