Vision-language models (VLMs) such as CLIP have demonstrated remarkable zero-shot generalization, yet they remain highly vulnerable to adversarial examples (AEs). While test-time defenses are promising, existing methods fail to provide sufficient robustness against strong attacks and are often hampered by high inference latency and task-specific applicability. To address these limitations, we begin by investigating the intrinsic properties of AEs, revealing that they exhibit severe feature inconsistency under progressive frequency attenuation. We further attribute this behavior to the model's inherent spectral bias. Leveraging this insight, we propose an efficient test-time defense named Contrastive Spectral Rectification (CSR). CSR optimizes a rectification perturbation that realigns the input with the natural data manifold under a spectral-guided contrastive objective, applied in an input-adaptive manner. Extensive experiments across 16 classification benchmarks demonstrate that CSR outperforms the state of the art by an average of 18.1% against strong AutoAttack with modest inference overhead. Furthermore, CSR exhibits broad applicability across diverse visual tasks. Code is available at https://github.com/Summu77/CSR.
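The abstract's diagnostic, progressive frequency attenuation, can be illustrated with a simple low-pass filter in the Fourier domain: an image is transformed with the 2-D FFT, only a shrinking central (low-frequency) band of the spectrum is kept, and the image is reconstructed at each level. The sketch below is a minimal illustration of this general idea, not the paper's implementation; the function names, the centered square mask, and the specific `ratios` schedule are assumptions for demonstration.

```python
import numpy as np

def low_pass_filter(img, keep_ratio):
    """Attenuate high frequencies of a 2-D image, keeping only the
    centered fraction `keep_ratio` of the shifted spectrum per axis."""
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))   # low frequencies at center
    kh = max(1, int(round(h * keep_ratio)))
    kw = max(1, int(round(w * keep_ratio)))
    mask = np.zeros((h, w), dtype=bool)
    top, left = (h - kh) // 2, (w - kw) // 2
    mask[top:top + kh, left:left + kw] = True      # keep central band only
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

def progressive_attenuation(img, ratios=(1.0, 0.5, 0.25, 0.125)):
    """Return progressively low-passed copies of `img` (hypothetical
    schedule); feature consistency across these copies could then be
    compared between clean and adversarial inputs."""
    return [low_pass_filter(img, r) for r in ratios]
```

Under the paper's observation, a VLM's features for a clean image should stay relatively consistent across such attenuated copies, while an adversarial image's features drift sharply as its high-frequency perturbation is removed.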