Comprehending natural language instructions is a desirable capability for both 2D and 3D layout synthesis systems. Existing methods implicitly model joint object distributions and express object relations, which hinders the controllability of generation. We introduce InstructLayout, a novel generative framework that integrates a semantic graph prior and a layout decoder to improve controllability and fidelity for 2D and 3D layout synthesis. The proposed semantic graph prior jointly learns layout appearances and object distributions, demonstrating versatility across various downstream tasks in a zero-shot manner. To facilitate benchmarking of text-driven 2D and 3D scene synthesis, we curate two high-quality datasets of layout-instruction pairs from public Internet resources using large language and multimodal models. Extensive experimental results show that the proposed method outperforms existing state-of-the-art approaches by a large margin on both 2D and 3D layout synthesis tasks. Thorough ablation studies confirm the efficacy of the crucial design components.