Embodied robotic systems increasingly rely on large language model (LLM)-based agents for high-level reasoning, planning, and decision-making during interaction with the environment. However, invoking LLM reasoning introduces substantial computational latency and resource overhead, which can interrupt action execution and reduce system reliability. Excessive reasoning may delay actions, while insufficient reasoning often leads to incorrect decisions and task failures. This raises a fundamental question for embodied agents: when should the agent reason, and when should it act? In this work, we propose RARRL (Resource-Aware Reasoning via Reinforcement Learning), a hierarchical framework for resource-aware orchestration of embodied agents. Rather than learning low-level control policies, RARRL learns a high-level orchestration policy that operates at the agent's decision-making layer. Conditioned on the current observation, execution history, and remaining resources, this policy adaptively determines whether to invoke reasoning, which reasoning role to employ, and how much computational budget to allocate. Extensive experiments, including evaluations with empirical latency profiles derived from the ALFRED benchmark, show that RARRL consistently improves task success rates while reducing execution latency and enhancing robustness compared with fixed or heuristic reasoning strategies. These results demonstrate that adaptive reasoning control is essential for building reliable and efficient embodied robotic agents.
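To make the orchestration decision concrete, the following is a minimal Python sketch of the high-level action space the abstract describes: at each step the policy maps (observation, execution history, remaining resources) to either acting directly or invoking reasoning with a chosen role and budget. The abstract does not specify an implementation, so the role names (PLANNER, REFLECTOR), the token budgets, and the random stand-in policy are all hypothetical placeholders for the learned RL policy.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional
import random

# Hypothetical reasoning roles: the paper mentions choosing "which reasoning
# role to employ" but does not enumerate the roles, so these are illustrative.
class ReasoningRole(Enum):
    PLANNER = "planner"      # e.g., propose a sub-goal plan
    REFLECTOR = "reflector"  # e.g., diagnose a failed action

@dataclass
class OrchestrationAction:
    invoke_reasoning: bool                # reason before acting, or act directly
    role: Optional[ReasoningRole] = None  # which reasoning role, if invoked
    budget_tokens: int = 0                # compute budget allocated to the call

def orchestration_policy(observation: dict,
                         history: list,
                         remaining_budget: int) -> OrchestrationAction:
    """Random stand-in for the learned policy: maps (observation, execution
    history, remaining resources) to one high-level orchestration action."""
    if remaining_budget <= 0 or random.random() < 0.5:
        return OrchestrationAction(invoke_reasoning=False)  # act immediately
    role = random.choice(list(ReasoningRole))
    budget = min(remaining_budget, random.choice([256, 512, 1024]))
    return OrchestrationAction(invoke_reasoning=True, role=role,
                               budget_tokens=budget)

# Toy rollout: the orchestrator interleaves acting and reasoning, spending
# from a fixed resource budget until it is exhausted.
remaining = 2048
history: list = []
for step in range(5):
    action = orchestration_policy({"step": step}, history, remaining)
    if action.invoke_reasoning:
        remaining -= action.budget_tokens
        history.append(f"reason({action.role.value}, {action.budget_tokens})")
    else:
        history.append("act")
print(history, "remaining budget:", remaining)
```

In an actual RARRL agent, the random choice above would be replaced by the trained orchestration policy, and the reasoning branch would dispatch an LLM call with the selected role and budget, while the acting branch executes the next low-level action directly.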