Real-world design documents (e.g., posters) are inherently multi-layered, combining decoration, text, and images. Editing them from natural-language instructions requires fine-grained, layer-aware reasoning to identify the relevant layers and coordinate modifications across them. Prior work largely overlooks multi-layer design document editing, focusing instead on single-layer image editing or multi-layer generation, both of which assume a flat canvas and lack the reasoning needed to determine what to modify and where. To address this gap, we introduce the Multi-Layer Document Editing Agent (MiLDEAgent), a reasoning-based framework that couples an RL-trained multimodal reasoner for layer-wise understanding with an image editor for targeted modifications. To systematically benchmark this setting, we construct MiLDEBench, a human-in-the-loop corpus of over 20K design documents paired with diverse editing instructions. The benchmark is complemented by a task-specific evaluation protocol, MiLDEEval, which spans four dimensions: instruction following, layout consistency, aesthetics, and text rendering. Extensive experiments on 14 open-source and 2 closed-source models reveal that existing approaches fail to generalize: open-source models often cannot complete multi-layer document editing tasks, while closed-source models suffer from format violations. In contrast, MiLDEAgent achieves strong layer-aware reasoning and precise editing, significantly outperforming all open-source baselines and attaining performance comparable to closed-source models, thereby establishing the first strong baseline for multi-layer document editing.