We introduce SLayR, Scene Layout Generation with Rectified flow. State-of-the-art text-to-image models achieve impressive results, but they generate images end-to-end and expose no fine-grained control over the process. SLayR is a novel transformer-based rectified flow model for layout generation over a token space that can be decoded into bounding boxes and corresponding labels, which can then be transformed into images using existing models. We show that established metrics for generated images are inconclusive for evaluating their underlying scene layout, and we introduce a new benchmark suite, including a carefully designed, repeatable human-evaluation procedure that assesses the plausibility and variety of generated layouts. In contrast to previous works, which perform well in either variety or plausibility but not both, our approach scores highly on both axes at the same time. It is also at least 5× smaller in parameter count and 37% faster than the baselines. Our complete text-to-image pipeline demonstrates the added benefits of an interpretable and editable intermediate representation.