Recent breakthroughs in generative simulation have harnessed Large Language Models (LLMs) to generate diverse robotic task curricula, yet these open-loop paradigms frequently produce linguistically coherent but physically infeasible goals, owing to ungrounded task specifications or misaligned objective formulations. To address this limitation, we propose FATE (Feasibility-Aware Task gEneration), a closed-loop, self-correcting framework that recasts task generation as an iterative validation-and-refinement process. Unlike conventional methods that decouple generation and verification into discrete stages, FATE embeds a generalist embodied agent directly into the generation loop to proactively guarantee the physical groundedness of the resulting curriculum. FATE instantiates a sequential auditing pipeline: it first validates static scene attributes (e.g., object affordances, layout compatibility) and then verifies execution feasibility through simulated embodied interaction. Critically, upon detecting an infeasible task, FATE invokes an active repair module that autonomously adapts scene configurations or policy specifications, converting unworkable proposals into physically valid task instances. Extensive experiments show that FATE generates semantically diverse, physically grounded task curricula while substantially reducing execution failure rates relative to state-of-the-art generative baselines.
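The closed-loop propose-audit-repair process described above can be sketched as a simple control loop. This is a minimal illustrative sketch, not the authors' implementation: all function names (`propose_task`, `check_static`, `simulate`, `repair`) and the round budget are hypothetical stand-ins for the LLM proposer, static scene auditor, embodied simulation check, and active repair module.

```python
# Hypothetical sketch of FATE's closed-loop task generation.
# All callables below are illustrative placeholders, not the paper's API.

def fate_generate(propose_task, check_static, simulate, repair, max_rounds=3):
    """Iteratively propose, audit, and repair a task until it is feasible.

    propose_task() -> task      : LLM proposes a candidate task (scene + goal).
    check_static(task) -> bool  : audit static attributes (affordances, layout).
    simulate(task) -> bool      : embodied agent attempts the task in simulation.
    repair(task) -> task        : adapt scene configuration or policy spec.
    """
    task = propose_task()
    for _ in range(max_rounds):
        # Stage 1: cheap static audit before spending simulation budget;
        # Stage 2: execution-feasibility check via simulated interaction.
        if check_static(task) and simulate(task):
            return task          # physically grounded task instance
        task = repair(task)      # active repair of the infeasible proposal
    return None                  # discard proposals that cannot be repaired


# Toy usage: a proposal that becomes feasible after one repair round.
state = {"repairs": 0}

def propose():      return {"goal": "stack cups", "feasible": False}
def static_ok(t):   return True
def sim_ok(t):      return t["feasible"]
def fix(t):
    state["repairs"] += 1
    return {**t, "feasible": True}

result = fate_generate(propose, static_ok, sim_ok, fix)
print(result["feasible"], state["repairs"])  # True 1
```

The static check is ordered before simulation so that obviously malformed scenes are rejected without paying for an embodied rollout, mirroring the sequential auditing pipeline described in the abstract.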