Slot Attention, an approach that binds different objects in a scene to a set of "slots", has become a leading method in unsupervised object-centric learning. Most methods assume a fixed slot count K; to better accommodate the dynamic nature of object cardinality, a few works have explored K-adaptive variants. However, existing K-adaptive methods still suffer from two limitations. First, they do not explicitly constrain slot-binding quality, so low-quality slots lead to ambiguous feature attribution. Second, adding a slot-count penalty to the reconstruction objective creates conflicting optimization goals between reducing the number of active slots and maintaining reconstruction fidelity. As a result, they still lag significantly behind strong K-fixed baselines. To address these challenges, we propose Quality-Guided K-Adaptive Slot Attention (QASA). First, we decouple slot selection from reconstruction, eliminating the mutual constraints between the two objectives. Then, we propose an unsupervised Slot-Quality metric to assess per-slot quality, providing a principled signal for fine-grained slot-object binding. Based on this metric, we design a Quality-Guided Slot Selection scheme that dynamically selects a subset of high-quality slots and feeds them into our newly designed gated decoder for reconstruction during training. At inference, token-wise competition on slot attention yields a K-adaptive outcome. Experiments show that QASA substantially outperforms existing K-adaptive methods on both real and synthetic datasets. Moreover, on real-world datasets QASA even surpasses K-fixed methods.
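The quality-guided selection step can be illustrated with a minimal sketch. Note the abstract does not define the Slot-Quality metric itself, so this example substitutes a placeholder score based on attention-map entropy (a compactly attending slot scores high, a diffusely attending one scores low); the function names, the threshold `tau`, and the entropy-based score are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def slot_quality(attn, eps=1e-9):
    # attn: [K, N] attention weights of K slots over N image tokens.
    # Placeholder quality score (assumption, not the paper's metric):
    # normalize each slot's attention map and measure its entropy.
    # Low entropy -> the slot binds a compact region -> high quality.
    p = attn / (attn.sum(axis=1, keepdims=True) + eps)   # per-slot distribution
    entropy = -(p * np.log(p + eps)).sum(axis=1)         # [K]
    max_entropy = np.log(attn.shape[1])                  # uniform-attention bound
    return 1.0 - entropy / max_entropy                   # in [0, 1], higher = better

def select_slots(slots, attn, tau=0.5):
    # Keep only slots whose quality exceeds the (hypothetical) threshold tau;
    # in QASA the surviving subset would be passed to the gated decoder.
    quality = slot_quality(attn)
    keep = quality >= tau
    return slots[keep], quality, keep

# Toy example: 3 slots over 4 tokens; slots 0 and 1 attend sharply
# to one token each, slot 2 attends uniformly (a "low-quality" slot).
rng = np.random.default_rng(0)
slots = rng.normal(size=(3, 8))                          # [K=3, D=8]
attn = np.array([[0.97, 0.01, 0.01, 0.01],
                 [0.01, 0.97, 0.01, 0.01],
                 [0.25, 0.25, 0.25, 0.25]])
kept, quality, mask = select_slots(slots, attn, tau=0.5)
print(mask)  # the uniformly attending slot is dropped
```

The point of the sketch is the decoupling the abstract describes: the selection decision is driven by a per-slot quality signal rather than by a slot-count penalty inside the reconstruction loss.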