Robotic knot-tying is a fundamental challenge in robotics due to the complex interactions between deformable objects and strict topological constraints. We present TWISTED-RL, a framework that improves on the previous state of the art in demonstration-free knot-tying, TWISTED, which decomposed the knot-tying problem into manageable subproblems, each addressed by a specialized agent. Our approach replaces TWISTED's single-step inverse model, learned via supervised learning, with a multi-step reinforcement learning policy conditioned on abstract topological actions rather than goal states. This change enables finer-grained topological state transitions while avoiding costly and ineffective data-collection protocols, yielding better generalization across diverse knot configurations. Experimental results demonstrate that TWISTED-RL solves previously unattainable knots of higher complexity, including commonly used knots such as the Figure-8 and the Overhand. Its higher success rates and lower planning times establish TWISTED-RL as the new state of the art in robotic knot-tying without human demonstrations.
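To make the core architectural idea concrete, the following is a minimal, hypothetical sketch of a policy conditioned on a discrete topological action rather than a goal state. The action names (`R1`, `R2`, `cross`), the class and function names, and the toy linear policy are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch (not the TWISTED-RL implementation): a multi-step
# policy that is conditioned on an abstract topological action instead of
# a goal state, and emits low-level manipulation commands.
import numpy as np

# Illustrative discrete topological action set (assumed names).
TOPOLOGICAL_ACTIONS = ["R1", "R2", "cross"]


def one_hot(index, size):
    """Encode a discrete action index as a one-hot vector."""
    v = np.zeros(size)
    v[index] = 1.0
    return v


class TopologyConditionedPolicy:
    """Maps (rope state, topological action) -> one low-level action."""

    def __init__(self, state_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = state_dim + len(TOPOLOGICAL_ACTIONS)
        # Toy linear policy standing in for a learned network.
        self.W = rng.normal(scale=0.1, size=(action_dim, in_dim))

    def step(self, rope_state, topo_action):
        cond = one_hot(TOPOLOGICAL_ACTIONS.index(topo_action),
                       len(TOPOLOGICAL_ACTIONS))
        x = np.concatenate([rope_state, cond])
        return np.tanh(self.W @ x)  # bounded low-level command


# Multi-step rollout: several low-level commands realize one abstract
# topological transition (here with toy placeholder dynamics).
policy = TopologyConditionedPolicy(state_dim=6, action_dim=3)
state = np.zeros(6)
for _ in range(4):
    u = policy.step(state, "cross")
    state = state + 0.1 * np.pad(u, (0, 3))
```

The design point the sketch illustrates: because the conditioning input is a small discrete set of topological moves rather than a full goal configuration, the same policy can be reused across many target knots, which is the generalization argument made in the abstract.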