Recent literature suggests that the bigger the model, the more likely it is to converge to similar, ``universal'' representations, despite differences in training objectives, datasets, or modalities. While this literature shows that there is a regime in which model representations become similar, we study here how vision models arrive at those representations -- in particular, do they also converge to the same intermediate steps and operations? We therefore study the processes that lead to convergent representations in different models. First, we quantify the distance between different models' representations at different processing stages. We then follow the evolution of these distances throughout processing, identifying the processing steps that differ most between models. We find that while layers at similar relative positions in different models have the most similar representations, strong differences remain. Classifier models, unlike the others, discard information about low-level image statistics in their final layers. CNN- and transformer-based models also behave differently, with transformer models applying smoother changes to representations from one layer to the next. These distinctions clarify the level and nature of convergence between model representations, and enable a more qualitative account of the underlying processes in image models.
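To make the layerwise comparison concrete, the sketch below shows one common way such distances could be computed; the abstract does not name a specific metric, so linear CKA is assumed here purely for illustration, and the toy activation matrices stand in for hypothetical layer outputs of two models.

\begin{verbatim}
# Minimal sketch: layer-by-layer representation distance between two models.
# Linear CKA is an assumption for illustration; the paper's actual metric
# may differ. Activations are random stand-ins for real model features.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between activation matrices of shape (n_samples, n_features)."""
    x = x - x.mean(axis=0, keepdims=True)  # center each feature
    y = y - y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2       # cross-covariance term
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return float(hsic / (norm_x * norm_y))

def layerwise_distance(acts_a, acts_b) -> np.ndarray:
    """Distance (1 - CKA) between every layer of model A and every layer of model B."""
    dist = np.zeros((len(acts_a), len(acts_b)))
    for i, a in enumerate(acts_a):
        for j, b in enumerate(acts_b):
            dist[i, j] = 1.0 - linear_cka(a, b)
    return dist

# Toy usage: two hypothetical models with different depths and widths.
rng = np.random.default_rng(0)
acts_a = [rng.normal(size=(128, 64)) for _ in range(6)]   # 6 layers, 64-dim
acts_b = [rng.normal(size=(128, 96)) for _ in range(8)]   # 8 layers, 96-dim
print(layerwise_distance(acts_a, acts_b).round(2))
\end{verbatim}

Tracking the minimum of each row of this layer-by-layer distance matrix is one way to follow how closely corresponding stages of two models align throughout processing.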