Understanding the reliability of natural language generation is critical for deploying foundation models in security-sensitive domains. While certified poisoning defenses provide provable robustness bounds for classification tasks, they are fundamentally ill-equipped for autoregressive generation: they can handle neither sequential predictions nor the exponentially large output space of language models. To establish a framework for certified natural language generation, we formalize two security properties: stability (robustness to any change in generation) and validity (robustness to targeted, harmful changes in generation). We introduce Targeted Partition Aggregation (TPA), the first algorithm to certify validity against targeted attacks by computing the minimum poisoning budget needed to induce a specific harmful class, token, or phrase. We further extend TPA with mixed-integer linear programming (MILP) to provide tighter guarantees for multi-turn generation. Empirically, we demonstrate TPA's effectiveness across diverse settings, including certifying the validity of agent tool-calling when adversaries modify up to 0.5% of the dataset, and certifying 8-token stability horizons in preference-based alignment. Though inference-time latency remains an open challenge, our contributions enable certified deployment of language models in security-critical applications.
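As a rough illustration of the partition-aggregation idea behind such certificates (a minimal sketch, not the paper's TPA implementation), suppose an ensemble is trained on disjoint data partitions and predictions are decided by plurality vote. Because each poisoned training sample falls in at most one partition, it can flip at most one vote, so the minimum poisoning budget needed to force a specific target prediction can be bounded by counting how many votes must be flipped. The function name and greedy procedure below are illustrative assumptions:

```python
from collections import Counter

def min_poison_budget(votes, target):
    """Lower-bound the number of poisoned training samples needed to make
    `target` the strict plurality winner under partition aggregation.

    Assumption: partitions are disjoint, so one poisoned sample flips at
    most one vote (as in deep-partition-aggregation-style defenses).
    """
    counts = Counter(votes)
    counts.setdefault(target, 0)
    budget = 0
    while True:
        # Strongest competitor class (excluding the target itself).
        top_cls, top_n = max(
            ((c, n) for c, n in counts.items() if c != target),
            key=lambda x: x[1],
            default=(None, 0),
        )
        if counts[target] > top_n:
            return budget  # target already wins by strict plurality
        # One poisoned sample flips one vote from the strongest
        # competitor to the target, shrinking the gap by 2.
        counts[top_cls] -= 1
        counts[target] += 1
        budget += 1
```

For example, with votes `['a', 'a', 'a', 'a', 'a', 'b', 'b']` and target `'b'`, the gap of 3 closes after 2 flips, so at least 2 poisoned samples are required. A targeted certificate then reports this budget; if it exceeds the adversary's assumed capacity (e.g., 0.5% of the dataset), the prediction is certified valid.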