Frequency shortcuts are specific frequency patterns that models heavily rely on for correct classification. Previous studies have shown that models trained on small image datasets often exploit such shortcuts, potentially impairing their generalization performance. However, existing methods for identifying frequency shortcuts require expensive computations and become impractical for analyzing models trained on large datasets. In this work, we propose the first approach for efficiently analyzing frequency shortcuts at a larger scale. We show that both CNN and transformer models learn frequency shortcuts on ImageNet. We also show that frequency shortcut solutions can yield good performance on out-of-distribution (OOD) test sets that largely retain texture information. However, these shortcuts, which mostly align with texture patterns, hinder model generalization on rendition-based OOD test sets. These observations suggest that current OOD evaluations often overlook the impact of frequency shortcuts on model generalization. Future benchmarks could thus benefit from explicitly assessing and accounting for these shortcuts to build models that generalize across a broader range of OOD scenarios.