Generative models have become a powerful tool for image editing tasks, including object insertion. However, these methods often lack spatial awareness, generating objects at unrealistic locations and scales or unintentionally altering the scene background. A key challenge lies in maintaining visual coherence, which requires both a geometrically suitable object location and a high-quality image edit. In this paper, we focus on the former, creating a location model dedicated to identifying realistic object locations. Specifically, we train an autoregressive model that generates bounding-box coordinates conditioned on the background image and the desired object class. This formulation allows us to handle sparse placement annotations effectively and to incorporate implausible locations into a preference dataset for direct preference optimization. Our extensive experiments demonstrate that our generative location model, when paired with an inpainting method, substantially outperforms state-of-the-art instruction-tuned models and location-modeling baselines on object insertion tasks, delivering accurate and visually coherent results.
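The coordinates-as-tokens formulation described above can be sketched minimally as follows. This is a hedged illustration, not the paper's implementation: box coordinates are assumed normalized to [0, 1] and quantized into a 256-bin vocabulary so that a box becomes a four-token sequence, and the `next_token_logits` callback is a purely hypothetical stand-in for the conditional model (image and class conditioning omitted).

```python
# Sketch: representing a bounding box as a discrete token sequence for an
# autoregressive location model. NUM_BINS is an assumed vocabulary size.
NUM_BINS = 256

def box_to_tokens(box):
    """Quantize a normalized (x1, y1, x2, y2) box into 4 integer tokens."""
    return [min(int(c * NUM_BINS), NUM_BINS - 1) for c in box]

def tokens_to_box(tokens):
    """Map tokens back to bin-center coordinates in [0, 1]."""
    return [(t + 0.5) / NUM_BINS for t in tokens]

def greedy_decode(next_token_logits, num_tokens=4):
    """Autoregressively pick one coordinate token at a time.

    `next_token_logits(prefix)` is a hypothetical callback returning a list
    of NUM_BINS scores for the next token, conditioned on the tokens so far
    (and, in the real model, on the background image and object class).
    """
    tokens = []
    for _ in range(num_tokens):
        logits = next_token_logits(tokens)
        tokens.append(max(range(NUM_BINS), key=lambda t: logits[t]))
    return tokens
```

The round trip through quantization loses at most half a bin width per coordinate, which is why a few hundred bins suffice for placement at typical image resolutions.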