Completing complex tasks in unpredictable settings such as home kitchens challenges robotic systems. These challenges include interpreting high-level human commands, such as "make me a hot beverage", and performing actions such as pouring a precise amount of water into a moving mug. To address these challenges, we present a novel framework that combines Large Language Models (LLMs), a curated Knowledge Base, and Integrated Force and Visual Feedback (IFVF). Our approach interprets abstract instructions, performs long-horizon tasks, and handles various uncertainties. It utilises GPT-4 to analyse the user's query and surroundings, then generates code that accesses a curated database of functions during execution, translating abstract instructions into actionable steps. Each step involves generating custom code via retrieval-augmented generation, which pulls IFVF-relevant examples from the Knowledge Base. IFVF allows the robot to respond to noise and disturbances during execution. We demonstrate our approach on coffee making and plate decoration, with components ranging from pouring to drawer opening, each benefiting from a distinct feedback type and method. This work marks significant progress toward a scalable, efficient robotic framework for completing complex tasks in uncertain environments. Our findings are illustrated in an accompanying video and supported by an open-source GitHub repository (to be released upon paper acceptance).
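The pipeline described above can be sketched in miniature: a planner decomposes the query into steps, and each step retrieves the most similar knowledge-base example to assemble executable code. This is a toy illustration only; every name here (`KNOWLEDGE_BASE`, `retrieve`, `plan`, `generate_program`) is hypothetical and the string similarity stands in for GPT-4 and real embedding retrieval.

```python
# Minimal sketch of retrieval-augmented code generation over a function
# knowledge base, as described in the abstract. All identifiers are
# hypothetical illustrations, not the authors' released API.
from difflib import SequenceMatcher

# Toy knowledge base: skill names mapped to example snippets that use
# force or visual feedback; real entries would be curated function examples.
KNOWLEDGE_BASE = {
    "pour": "pour(target, amount_ml, stop_on=force_feedback)",
    "open_drawer": "open_drawer(handle, stop_on=force_feedback)",
    "pick": "pick(item, align_with=visual_feedback)",
}

def retrieve(step: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base examples most similar to the step text."""
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda item: SequenceMatcher(None, step, item[0]).ratio(),
        reverse=True,
    )
    return [snippet for _, snippet in scored[:k]]

def plan(query: str) -> list[str]:
    """Stand-in for the LLM planner: split a command into named steps."""
    if "beverage" in query or "coffee" in query:
        return ["open_drawer", "pick", "pour"]
    return []

def generate_program(query: str) -> list[str]:
    """Compose per-step code from retrieved examples (stand-in for GPT-4)."""
    return [retrieve(step)[0] for step in plan(query)]

program = generate_program("make me a hot beverage")
```

In the actual framework each retrieved snippet would be adapted by the LLM and executed with live force and visual feedback, rather than returned verbatim as here.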