Unsupervised object-centric learning aims to decompose scenes into interpretable object entities, termed slots. Slot-based autoencoders stand out as a prominent method for this task. Within them, two aspects are crucial: guiding the encoder to generate object-specific slots and ensuring the decoder utilizes them during reconstruction. This work introduces two novel techniques: (i) an attention-based self-training approach, which distills superior slot-based attention masks from the decoder to the encoder, improving object segmentation, and (ii) a patch-order permutation strategy for autoregressive transformers that strengthens the role of slot vectors in reconstruction. We demonstrate the effectiveness of both strategies experimentally. Their combination significantly surpasses prior slot-based autoencoder methods in unsupervised object segmentation, especially on complex real-world images. The implementation code is available at https://github.com/gkakogeorgiou/spot.
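To make the first technique concrete, below is a minimal NumPy sketch of attention-based self-training as distillation: each patch is hard-assigned to its best slot under the decoder's (teacher) attention mask, and the encoder's (student) slot-attention mask is penalized with cross-entropy against those assignments. The function name, shapes, and use of hard assignments are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import numpy as np

def distill_loss(student_attn, teacher_attn, eps=1e-8):
    """Cross-entropy between the encoder's (student) slot-attention masks
    and hard per-patch slot assignments from the decoder's (teacher) masks.

    Both inputs have shape (num_patches, num_slots); each row sums to 1.
    """
    hard = np.argmax(teacher_attn, axis=1)            # best slot per patch
    picked = student_attn[np.arange(len(hard)), hard]  # student prob. of that slot
    return float(-np.mean(np.log(picked + eps)))
```

A student whose masks agree with the teacher's assignments incurs a near-zero loss, while a disagreeing student is penalized, pushing the encoder's segmentation toward the decoder's sharper masks.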
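The second technique can likewise be sketched: instead of always decoding patches in raster order, the autoregressive target sequence is built under a permutation, so the decoder must predict patch `perm[t]` from patches `perm[0..t-1]` and therefore leans more on the slot vectors for global context. The function name and shapes below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def permuted_ar_sequence(patch_tokens, perm):
    """Build (inputs, targets) for autoregressive decoding under a
    patch-order permutation.

    patch_tokens: (num_patches, dim) array of patch embeddings or token ids.
    perm: a permutation of range(num_patches).
    Returns the permuted sequence shifted by one step: the model predicts
    patch perm[t] given patches perm[0..t-1].
    """
    seq = patch_tokens[np.asarray(perm)]  # reorder patches by the permutation
    return seq[:-1], seq[1:]              # standard one-step teacher forcing
```

For example, with `perm = np.random.permutation(num_patches)` drawn per batch, local raster-neighbor shortcuts are broken and reconstruction must route more information through the slots.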