Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks. Existing CLIP-based approaches perform OOD detection by devising novel scoring functions or sophisticated fine-tuning methods. In this work, we propose SeTAR, a novel, training-free OOD detection method that leverages selective low-rank approximation of weight matrices in vision-language and vision-only models. SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm. Based on SeTAR, we further propose SeTAR+FT, a fine-tuning extension that optimizes model performance for OOD detection tasks. Extensive evaluations on the ImageNet1K and Pascal-VOC benchmarks show SeTAR's superior performance, reducing the false positive rate by up to 18.95% and 36.80% relative to zero-shot and fine-tuning baselines, respectively. Ablation studies further validate our approach's effectiveness, robustness, and generalizability across different model backbones. Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.
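The core operation behind SeTAR, replacing a weight matrix with a low-rank approximation, can be sketched with a truncated SVD. The snippet below is a minimal illustration of that building block, not the paper's selection procedure: the function name `low_rank_approx` and the choice of rank are ours, and SeTAR's greedy search over which matrices and ranks to modify is omitted.

```python
import numpy as np

def low_rank_approx(W: np.ndarray, rank: int) -> np.ndarray:
    """Return the best rank-`rank` approximation of W (by Frobenius norm),
    obtained by keeping only the top singular components."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :]

# Toy example: approximate a random 8x8 weight matrix with rank 2.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_hat = low_rank_approx(W, rank=2)
```

In a post-hoc pipeline like SeTAR's, such an approximation would overwrite selected weight matrices of a frozen model before OOD scoring, requiring no gradient updates.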