Robot evaluations in language-guided, real-world settings are time-consuming and often sample only a small portion of the space of potential instructions across complex scenes. In this work, we introduce contrast sets for robotics as an approach for making small but specific perturbations to otherwise independent, identically distributed (i.i.d.) test instances. We investigate the relationship between the experimenter effort required to carry out an evaluation and the resulting estimate of test performance, as well as the insights that can be drawn from performance on perturbed instances. We use contrast sets to characterize policies at reduced experimenter effort in both a simulated manipulation task and a physical-robot vision-and-language navigation task. We encourage the use of contrast set evaluations as a more informative alternative to small-scale, i.i.d. demonstrations on physical robots, and as a scalable alternative to industry-scale real-world evaluations.
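To make the idea of "small but specific perturbations" concrete, the following is a minimal, hypothetical sketch of constructing a contrast set from one i.i.d. test instance. The `Instance` dataclass, the perturbation choices (swapping the referent object and the spatial relation), and all names are illustrative assumptions, not the paper's actual evaluation code.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Instance:
    instruction: str   # language command given to the robot (assumed representation)
    target: str        # object the instruction refers to
    relation: str      # spatial relation used in the instruction

def contrast_set(base: Instance) -> list[Instance]:
    """Perturb one field at a time, holding the rest of the scene fixed,
    so each contrast instance differs minimally from the original."""
    # Swap only the referent object ("mug" is an arbitrary example choice).
    object_swap = replace(
        base,
        target="mug",
        instruction=base.instruction.replace(base.target, "mug"),
    )
    # Swap only the spatial relation, keeping the object unchanged.
    relation_swap = replace(
        base,
        relation="left of",
        instruction=base.instruction.replace(base.relation, "left of"),
    )
    return [object_swap, relation_swap]

base = Instance("pick up the bowl right of the lamp", "bowl", "right of")
for inst in contrast_set(base):
    print(inst.instruction)
```

A policy would then be evaluated on the original instance and on each contrast instance; because each perturbation changes exactly one factor, differences in success rates can be attributed to that factor rather than to scene-to-scene variation.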