As increasingly capable open-weight large language models (LLMs) are deployed, improving their tamper resistance against unsafe modifications, whether accidental or intentional, becomes critical to minimizing risks. However, there is no standard approach to evaluating tamper resistance: varied datasets, metrics, and tampering configurations make it difficult to compare safety, utility, and robustness across models and defenses. To this end, we introduce TamperBench, the first unified framework for systematically evaluating the tamper resistance of LLMs. TamperBench (i) curates a repository of state-of-the-art weight-space fine-tuning attacks and latent-space representation attacks; (ii) enables realistic adversarial evaluation through systematic hyperparameter sweeps per attack-model pair; and (iii) provides both safety and utility evaluations. TamperBench requires minimal additional code to specify any fine-tuning configuration, alignment-stage defense method, and metric suite while ensuring end-to-end reproducibility. We use TamperBench to evaluate 21 open-weight LLMs, including defense-augmented variants, across nine tampering threats using standardized safety and capability metrics with hyperparameter sweeps per model-attack pair. This yields novel insights, including the effects of post-training on tamper resistance, the finding that jailbreak-tuning is typically the most severe attack, and the emergence of Triplet as a leading alignment-stage defense. Code is available at: https://github.com/criticalml-uw/TamperBench