Masked Image Modeling (MIM)-based models, such as SdAE, CAE, GreenMIM, and MixAE, have explored different strategies to improve Masked Autoencoders (MAE) by modifying the prediction target, the loss function, or the architecture. In this paper, we propose an enhanced approach that boosts MAE performance by integrating pseudo labelling for both class and data tokens, and by replacing the traditional pixel-level reconstruction with token-level reconstruction. This strategy uses cluster assignments as pseudo labels to promote instance-level discrimination within the network, while token reconstruction requires generating discrete tokens that capture local context. The targets for both pseudo labelling and reconstruction need to be generated by a teacher network. To disentangle the generation of target pseudo labels from the reconstruction of token features, we decouple the teacher into two distinct models: one serves as a labelling teacher and the other as a reconstruction teacher. This separation proves empirically superior to a single teacher, while having negligible impact on throughput and memory consumption. Incorporating pseudo labelling as an auxiliary task yields notable improvements on ImageNet-1K and other downstream tasks, including classification, semantic segmentation, and detection.
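The combined objective described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: all shapes, the equal loss weighting `lam`, and the use of soft cluster assignments as pseudo-label targets are illustrative assumptions. It shows only how a pseudo-labelling cross-entropy term (from a labelling teacher) and a token-level reconstruction term (from a reconstruction teacher) would be added into one training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, targets):
    """Cross-entropy between student logits and soft pseudo-label targets."""
    logp = np.log(softmax(logits))
    return -(targets * logp).sum(axis=-1).mean()

# Hypothetical sizes: batch of 4 images, 16 patch tokens, 32-dim token
# features, K = 10 cluster prototypes (all illustrative, not from the paper).
B, N, D, K = 4, 16, 32, 10

# Student outputs (stand-ins for real network activations).
student_cls = rng.normal(size=(B, K))        # class-token logits over clusters
student_tokens = rng.normal(size=(B, N, D))  # predicted features at masked positions

# Labelling teacher: cluster assignments used as soft pseudo labels.
teacher_cls = softmax(rng.normal(size=(B, K)))
# Reconstruction teacher: target token features for the masked patches.
teacher_tokens = rng.normal(size=(B, N, D))

loss_label = cross_entropy(student_cls, teacher_cls)          # instance-level discrimination
loss_recon = ((student_tokens - teacher_tokens) ** 2).mean()  # token-level reconstruction
lam = 1.0  # illustrative weighting between the two terms
total_loss = loss_label + lam * loss_recon
```

In a real setup the two teachers would be separate networks (e.g. EMA copies of the student), the pseudo labels would also be produced per data token rather than only for the class token, and the reconstruction loss would be computed only over masked positions.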