Large-scale robot learning has recently shown promise for enabling robots to perform complex tasks by integrating perception, control, and language understanding. Yet it still struggles with long-horizon, contact-rich manipulation such as deformable-object handling, where demonstration quality is inconsistent. Reward modeling offers a natural solution: by providing grounded progress signals, it turns noisy demonstrations into stable supervision that generalizes across diverse trajectories. We introduce a stage-aware, video-based reward modeling framework that jointly predicts high-level task stages and fine-grained within-stage progress. Reward labels are derived automatically from natural-language subtask annotations, ensuring consistent progress estimation across variable-length demonstrations and avoiding the brittleness of frame-index labeling, which fails on variable-duration tasks such as folding a T-shirt. Our reward model is robust to demonstration variability, generalizes to out-of-distribution settings, and provides strong utility for policy training. Building on it, we propose Reward-Aligned Behavior Cloning (RA-BC), which filters out low-quality data and reweights the retained samples by reward. Experiments show that the reward model alone outperforms baselines on validation sets and real-robot rollouts. Integrated into RA-BC, our approach achieves 83\% success on folding T-shirts from the flattened state and 67\% from the crumpled state -- far surpassing vanilla behavior cloning, which attains only 8\% and 0\%. Overall, our results highlight reward modeling as a key enabler of scalable, annotation-efficient, and robust imitation learning for long-horizon manipulation.
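The filter-and-reweight step of RA-BC can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the quantile-based filtering threshold, and the normalization scheme are all assumptions for exposition.

```python
import numpy as np

def ra_bc_weights(rewards, keep_quantile=0.5):
    """Sketch of reward-aligned sample weighting (hypothetical helper).

    rewards: per-sample scalar progress rewards from the reward model.
    Samples below the keep_quantile threshold are filtered (weight 0);
    the rest are weighted proportionally to their reward so that
    higher-quality demonstrations dominate the behavior-cloning loss.
    """
    rewards = np.asarray(rewards, dtype=float)
    threshold = np.quantile(rewards, keep_quantile)  # filtering cutoff (assumed form)
    weights = np.where(rewards >= threshold, rewards, 0.0)
    total = weights.sum()
    return weights / total if total > 0 else weights

# Example: per-sample weights for a small batch of demonstration frames;
# these would scale each sample's imitation loss during training.
w = ra_bc_weights([0.1, 0.9, 0.4, 0.7])
```

In practice the resulting weights would multiply the per-sample behavior-cloning loss, so low-reward (noisy) demonstrations contribute little or nothing to the policy update.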