Recent approaches in machine learning often solve a task using a composition of multiple models or agentic architectures. When targeting such a composed system with adversarial attacks, it may not be computationally or informationally feasible to train an end-to-end proxy model, or a proxy model for every component of the system. We introduce a method to craft an adversarial attack against the overall multi-model system when we only have a proxy model for the final black-box model, and when the transformation applied by the initial models can render the adversarial perturbations ineffective. Current methods handle this either by applying many copies of the first model/transformation to an input and re-using a standard adversarial attack with averaged gradients, or by learning a proxy model for both stages. To our knowledge, this is the first attack specifically designed for this threat model; our method achieves a substantially higher attack success rate (80% vs. 25%) and produces perturbations that are 9.4% smaller (measured by MSE) than prior state-of-the-art methods. Our experiments focus on a supervised image pipeline, but we expect the attack to generalize to other multi-model settings (e.g., a mix of open- and closed-source foundation models) or agentic systems.
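The gradient-averaging baseline mentioned above can be illustrated with a minimal sketch: sample many instances of the stochastic first-stage transformation, back-propagate the final-model loss through each, average the gradients, and take a standard FGSM step. The toy loss, the scaling/noise transformation, and all names (`loss_grad`, `eot_fgsm`) are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16,))  # weights of a toy proxy for the final model

def loss_grad(z):
    # Gradient of a toy loss L(z) = -W . tanh(z) with respect to z,
    # standing in for gradients from the proxy of the final black-box model.
    return -W * (1.0 - np.tanh(z) ** 2)

def eot_fgsm(x, eps=0.1, n_samples=64):
    # Baseline from the text: average gradients over many sampled copies
    # of the first-stage transformation, then take one FGSM step.
    grads = []
    for _ in range(n_samples):
        s = rng.uniform(0.8, 1.2, size=x.shape)    # random per-feature scaling
        n = rng.normal(scale=0.01, size=x.shape)   # random additive noise
        z = s * x + n                               # one sampled transformation
        grads.append(s * loss_grad(z))              # chain rule through the transform
    return x + eps * np.sign(np.mean(grads, axis=0))

x = rng.normal(size=(16,))
x_adv = eot_fgsm(x)
```

Averaging over samples matters because any single draw of the transformation yields a gradient that may not attack the expected behavior of the stochastic first stage; the sign step bounds the perturbation to `eps` per coordinate.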