Tactile information plays a crucial role in human manipulation tasks and has recently garnered increasing attention in robotic manipulation. However, existing approaches mostly focus on aligning visual and tactile features, and their integration mechanism tends to be direct concatenation. Consequently, they struggle to cope with occluded scenarios because they neglect the inherent complementarity of the two modalities, and the alignment itself may be under-exploited, limiting their potential for real-world deployment. In this paper, we present ViTaS, a simple yet effective framework that incorporates both visual and tactile information to guide an agent's behavior. We introduce Soft Fusion Contrastive Learning, an advanced variant of conventional contrastive learning, together with a CVAE module, to exploit both the alignment and the complementarity within visuo-tactile representations. We demonstrate the effectiveness of our method in 12 simulated and 3 real-world environments, where ViTaS significantly outperforms existing baselines. Project page: https://skyrainwind.github.io/ViTaS/index.html.