Proprioception is the "sixth sense" that detects limb postures through sensory neurons in the musculoskeletal system. It requires seamless integration between musculoskeletal structures and sensory receptors, which is challenging for modern robots that aim for lightweight, adaptive, and sensitive designs at low cost. Here, we present the Soft Polyhedral Network with embedded vision for physical interaction, capable of adaptive kinesthesia and viscoelastic proprioception by learning kinetic features. The design adapts passively to omni-directional interactions, which are visually captured by a miniature high-speed motion-tracking system embedded inside for proprioceptive learning. The results show that the soft network can infer real-time 6D forces and torques with accuracies of 0.25/0.24/0.35 N and 0.025/0.034/0.006 Nm during dynamic interactions. We also incorporate viscoelasticity into proprioception during static adaptation by adding a creep-and-relaxation modifier that refines the predicted results. The proposed soft network combines design simplicity, omni-directional adaptation, and accurate proprioceptive sensing, making it a versatile, low-cost solution for robotics, with more than one million use cycles, in tasks such as sensitive and competitive grasping and touch-based geometry reconstruction. This study offers new insights into vision-based proprioception for soft robots in adaptive grasping, soft manipulation, and human-robot interaction.
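The abstract does not specify the form of the creep-and-relaxation modifier, so the following is only an illustrative sketch of how such a static-adaptation correction could look, assuming a one-term Prony-series relaxation model; the function name `relaxation_modifier` and the parameters `g_inf`, `g1`, and `tau1` are hypothetical, not from the paper.

```python
import math

def relaxation_modifier(f_predicted, t, g_inf=0.8, g1=0.2, tau1=5.0):
    """Scale a statically predicted contact force by a one-term Prony
    series to account for stress relaxation during a sustained hold.

    f_predicted : force inferred from the instantaneous deformation (N)
    t           : time elapsed since the static contact began (s)
    g_inf, g1   : long-term and transient relaxation weights; they sum
                  to 1 so the modifier is exactly 1 at t = 0
    tau1        : relaxation time constant (s), a material assumption
    """
    return f_predicted * (g_inf + g1 * math.exp(-t / tau1))
```

Under these assumptions the modifier leaves the prediction unchanged at the moment of contact and decays it toward `g_inf * f_predicted` during a long hold, mimicking how a viscoelastic body relaxes under constant deformation; a creep term for constant load could be sketched analogously.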