We introduce Talk2Move, a reinforcement learning (RL) based diffusion framework for text-instructed spatial transformation of objects within scenes. Spatially manipulating objects in a scene through natural language remains a challenge for multimodal generation systems. While existing text-based manipulation methods can adjust appearance or style, they struggle to perform object-level geometric transformations, such as translating, rotating, or resizing objects, because paired supervision is scarce and pixel-level optimization is limited. Talk2Move employs Group Relative Policy Optimization (GRPO) to explore geometric actions through diverse rollouts generated from input images and lightweight textual variations, removing the need for costly paired data. A spatial-reward-guided model aligns geometric transformations with the linguistic description, while off-policy step evaluation and active step sampling improve learning efficiency by focusing on informative transformation stages. Furthermore, we design object-centric spatial rewards that directly evaluate displacement, rotation, and scaling behaviors, enabling interpretable and coherent transformations. Experiments on curated benchmarks demonstrate that Talk2Move achieves precise, consistent, and semantically faithful object transformations, outperforming existing text-guided editing approaches in both spatial accuracy and scene coherence.
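The core of GRPO is that rollouts are scored relative to their own group, so no learned value baseline is needed. A minimal sketch of that group-relative advantage computation (the function name and interface are illustrative, not from the paper):

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Group-relative advantage as in GRPO: normalize each rollout's
    reward by the mean and standard deviation of its rollout group.

    `rewards` holds one scalar reward per rollout sampled for the same
    input image and instruction; `eps` guards against a zero std when
    all rollouts score identically.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    # Rollouts better than the group average get positive advantages,
    # worse ones negative; the advantages sum to (approximately) zero.
    return [(r - mean) / (std + eps) for r in rewards]
```

Because the baseline is the group itself, diverse rollouts from lightweight textual variations are enough to produce a learning signal without any paired ground-truth edits.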
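An object-centric spatial reward of the kind described above can be sketched as a function of the predicted versus target object transform. The parameterization (translation, rotation, scale) and the exponential combination below are assumptions for illustration, not the paper's exact formulation:

```python
import math

def spatial_reward(pred, target, w_t=1.0, w_r=1.0, w_s=1.0):
    """Hypothetical object-centric spatial reward.

    Each transform is a tuple (dx, dy, theta, scale): translation in
    normalized image coordinates, rotation in radians, and a scale
    factor. Displacement, rotation, and scaling are evaluated as
    separate error terms, then combined into a score in (0, 1].
    """
    # Translation error: Euclidean distance between displacement vectors.
    trans_e = math.hypot(pred[0] - target[0], pred[1] - target[1])
    # Rotation error: wrap the angular difference into [-pi, pi].
    rot_e = abs((pred[2] - target[2] + math.pi) % (2 * math.pi) - math.pi)
    # Scale error: log-ratio, symmetric in enlarging vs. shrinking.
    scale_e = abs(math.log(pred[3] / target[3]))
    # Perfect agreement on all three terms yields a reward of 1.0.
    return math.exp(-(w_t * trans_e + w_r * rot_e + w_s * scale_e))
```

Scoring each geometric behavior with its own term is what makes the reward interpretable: a low score can be attributed to a specific failure mode (wrong displacement, wrong rotation, or wrong scale) rather than to an opaque pixel-level loss.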