Recent studies have observed that intermediate layers of foundation models often yield more discriminative representations than the final layer. While initially attributed to autoregressive pretraining, this phenomenon has also been identified in models trained via supervised and discriminative self-supervised objectives. In this paper, we conduct a comprehensive study to analyze the behavior of intermediate layers in pretrained vision transformers. Through extensive linear probing experiments across a diverse set of image classification benchmarks, we find that distribution shift between pretraining and downstream data is the primary cause of performance degradation in deeper layers. Furthermore, we perform a fine-grained analysis at the module level. Our findings reveal that standard probing of transformer block outputs is suboptimal; instead, probing the activation within the feedforward network yields the best performance under significant distribution shift, whereas the normalized output of the multi-head self-attention module is optimal when the shift is weak.