Accurate pre-contact grasp force selection is critical for safe and reliable robotic manipulation. Adaptive controllers regulate force after contact but still require a reasonable initial estimate: starting a grasp with too little force demands reactive adjustment, while starting with too much force risks damaging fragile objects. This trade-off is particularly challenging for compliant grippers, whose contact mechanics are difficult to model analytically. We propose Exp-Force, an experience-conditioned framework that predicts the minimum feasible grasping force from a single RGB image. The method retrieves a small set of relevant prior grasping experiences and conditions a vision-language model on these examples for in-context inference, requiring neither analytic contact models nor manually designed heuristics. Across 129 object instances, Exp-Force achieves a best-case mean absolute error (MAE) of 0.43 N, a 72% error reduction over zero-shot inference. In real-world tests on 30 unseen objects, it raises the appropriate-force selection rate from 63% to 87%. These results demonstrate that Exp-Force enables reliable and generalizable pre-grasp force selection by leveraging prior interaction experience. http://expforcesubmission.github.io/Exp-Force-Website/
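The retrieve-then-condition pipeline described above can be sketched in miniature. This is a minimal illustration, not the authors' implementation: the experience bank, its embeddings and force values, and the prompt format are all hypothetical stand-ins, and a real system would use a learned vision encoder and an actual vision-language model in place of the stub prompt below.

```python
import math

# Hypothetical experience bank: each entry pairs a (precomputed) image
# embedding with the measured minimum feasible grasp force in newtons.
# All values are illustrative placeholders, not data from the paper.
EXPERIENCE_BANK = [
    {"label": "ceramic mug", "embedding": [0.9, 0.1, 0.0], "force_n": 3.2},
    {"label": "paper cup",   "embedding": [0.8, 0.3, 0.1], "force_n": 0.9},
    {"label": "steel bolt",  "embedding": [0.1, 0.9, 0.2], "force_n": 6.5},
    {"label": "ripe tomato", "embedding": [0.2, 0.2, 0.9], "force_n": 0.6},
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_embedding, k=2):
    """Return the k prior experiences most similar to the query image."""
    ranked = sorted(
        EXPERIENCE_BANK,
        key=lambda e: cosine(query_embedding, e["embedding"]),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(examples, query_description):
    """Format retrieved experiences as in-context examples for a VLM.

    A real system would pass images plus this text to the model; here we
    only assemble the textual scaffold to show the conditioning pattern.
    """
    lines = ["Predict the minimum feasible grasp force in newtons."]
    for e in examples:
        lines.append(f"Object: {e['label']} -> Force: {e['force_n']} N")
    lines.append(f"Object: {query_description} -> Force:")
    return "\n".join(lines)

# Example query: a new fragile object whose embedding lies near the tomato.
query = [0.25, 0.15, 0.85]
examples = retrieve(query, k=2)
prompt = build_prompt(examples, "unseen fragile fruit")
print(prompt)
```

The retrieval step pulls the fragile, low-force neighbors into the prompt, so the model's in-context examples already bracket the plausible force range for the query object.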