Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks. Existing CLIP-based approaches perform OOD detection by devising novel scoring functions or sophisticated fine-tuning methods. In this work, we propose SeTAR, a novel, training-free OOD detection method that leverages selective low-rank approximation of weight matrices in vision-language and vision-only models. SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm. Based on SeTAR, we further propose SeTAR+FT, a fine-tuning extension that optimizes model performance for OOD detection tasks. Extensive evaluations on the ImageNet1K and Pascal-VOC benchmarks show SeTAR's superior performance, reducing the relative false positive rate by up to 18.95% and 36.80% compared to zero-shot and fine-tuning baselines, respectively. Ablation studies further validate SeTAR's effectiveness, robustness, and generalizability across different model backbones. Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.
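As a minimal sketch of the core operation the abstract describes, the snippet below replaces a weight matrix with its truncated-SVD low-rank approximation. This illustrates only the low-rank approximation step, not SeTAR's actual greedy search over which matrices and ranks to modify (the function name and the toy matrix are illustrative, not from the paper):

```python
import numpy as np

def low_rank_approx(W: np.ndarray, rank: int) -> np.ndarray:
    """Return the best rank-`rank` approximation of W (truncated SVD).

    Illustrative only: SeTAR additionally uses a greedy search to decide
    which weight matrices to approximate and at what rank; that search
    (and its OOD scoring objective) is not reproduced here.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the top-`rank` singular components.
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

# Toy example: approximate a random 8x8 weight matrix at rank 2.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_approx = low_rank_approx(W, rank=2)
```

By the Eckart–Young theorem, this truncation is the closest rank-k matrix to W in Frobenius norm, which is why it is a natural post-hoc edit: it discards the smallest singular directions while preserving the dominant structure of the weights.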