Achieving generalizable bimanual manipulation requires systems that can learn efficiently from minimal human input while adapting to real-world uncertainties and diverse embodiments. Existing approaches face a dilemma: imitation policy learning demands extensive demonstrations to cover task variations, while modular methods often lack flexibility in dynamic scenes. We introduce VLBiMan, a framework that derives reusable skills from a single human example through task-aware decomposition, preserving invariant primitives as anchors while dynamically adapting adjustable components via vision-language grounding. This adaptation mechanism resolves scene ambiguities caused by background changes, object repositioning, or visual clutter without policy retraining, leveraging semantic parsing and geometric feasibility constraints. Moreover, the system inherits human-like hybrid control capabilities, enabling mixed synchronous and asynchronous use of both arms. Extensive experiments validate VLBiMan across tool-use and multi-object tasks, demonstrating: (1) a drastic reduction in demonstration requirements compared to imitation baselines, (2) compositional generalization through atomic skill splicing for long-horizon tasks, (3) robustness to novel but semantically similar objects and external disturbances, and (4) strong cross-embodiment transfer, showing that skills learned from human demonstrations can be instantiated on different robotic platforms without retraining. By bridging human priors with vision-language anchored adaptation, our work takes a step toward practical and versatile dual-arm manipulation in unstructured settings.