A robot's ability to manipulate objects relies heavily on its visual perception. In domains with cluttered scenes and high object variability, most methods require vast, laboriously hand-annotated datasets to train capable models. Once deployed, the challenge of generalizing to unfamiliar objects means the model must evolve alongside its domain. To address this, we propose a novel framework that combines Semi-Supervised Learning (SSL) with Learning Through Interaction (LTI), allowing a model to learn by observing scene alterations and to leverage visual consistency despite temporal gaps, without requiring curated interaction sequences. As a result, our approach exploits partially annotated data through self-supervision and incorporates temporal context using pseudo-sequences generated from unlabeled still images. We validate our method on two common benchmarks, ARMBench mix-object-tote and OCID, where it achieves state-of-the-art performance. Notably, on ARMBench we attain an $\text{AP}_{50}$ of $86.37$, almost a $20\%$ improvement over prior work, and obtain remarkable results in extremely low-annotation regimes, reaching an $\text{AP}_{50}$ of $84.89$ with just $1\%$ of annotated data, compared to the $72$ reported by ARMBench on the fully annotated counterpart.
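To make the pseudo-sequence idea concrete, the following is a minimal sketch (not the authors' implementation) of how a single unlabeled still image could be turned into a short fake "sequence" via augmentation, with a consistency term penalizing disagreement between per-frame predictions. All names (`make_pseudo_sequence`, `consistency_loss`, `model`) and the specific augmentations are illustrative assumptions.

```python
# Sketch only: pseudo-sequence generation from a still image plus a simple
# prediction-consistency objective. Assumes a model mapping (T, C, H, W) -> (T, K).
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Hypothetical augmentation pipeline used to simulate temporal change between frames
frame_jitter = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])

def make_pseudo_sequence(image: torch.Tensor, length: int = 3) -> torch.Tensor:
    """Turn one still image (C, H, W) into a pseudo-sequence (T, C, H, W)."""
    return torch.stack([frame_jitter(image) for _ in range(length)])

def consistency_loss(model: torch.nn.Module, sequence: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between per-frame predictions (visual consistency)."""
    probs = F.softmax(model(sequence), dim=-1)      # (T, K) per-frame class probabilities
    mean_pred = probs.mean(dim=0, keepdim=True)     # pseudo-target: frame-averaged prediction
    return F.kl_div(probs.log(), mean_pred.expand_as(probs), reduction="batchmean")
```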