Text-to-image (T2I) diffusion models are widely used in image editing due to their powerful generative capabilities. However, achieving fine-grained control over specific object attributes, such as color and material, remains a considerable challenge. Existing methods often either fail to modify these attributes accurately or compromise structural integrity and overall image consistency. To address this gap, we introduce Structure Preservation and Attribute Amplification (SPAA), a novel training-free framework that precisely generates color and material variants of the same object by selectively manipulating self-attention maps and cross-attention values within diffusion models. Building on SPAA, we integrate multimodal large language models (MLLMs) to automate data curation and instruction generation. Leveraging this object-attribute data collection engine, we construct the Attribute Dataset, which covers a comprehensive range of colors and materials across diverse object categories. Using this generated dataset, we propose InstructAttribute, an instruction-tuned model that enables fine-grained, object-level attribute editing through natural-language prompts. This capability has significant practical implications across diverse fields, from accelerating product design and e-commerce visualization to enhancing virtual try-on experiences. Extensive experiments demonstrate that InstructAttribute outperforms existing instruction-based baselines, achieving a superior balance between attribute-modification accuracy and structural preservation.
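To make the attention-manipulation idea concrete, below is a minimal PyTorch sketch of the two operations the abstract names: re-injecting a source self-attention map to preserve structure, and scaling the cross-attention value of an attribute token to amplify the new attribute. The `attention` helper, its hook parameters, and the toy tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the attention manipulation behind SPAA
# (hypothetical helper and shapes; not the authors' code).
import torch


def attention(q, k, v, attn_override=None, v_scale=None, token_idx=None):
    """Scaled dot-product attention with two editing hooks.

    attn_override: if given, replaces the computed attention map
                   (self-attention injection -> structure preservation).
    v_scale / token_idx: if given, scales the value vector of one text
                   token (cross-attention value -> attribute amplification).
    """
    attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    if attn_override is not None:  # reuse the source pass's attention map
        attn = attn_override
    if v_scale is not None and token_idx is not None:
        v = v.clone()
        v[..., token_idx, :] = v[..., token_idx, :] * v_scale
    return attn @ v, attn


# Toy shapes: batch 1, 16 image tokens, 8 text tokens, dim 64.
q_img = torch.randn(1, 16, 64)
k_img, v_img = torch.randn(1, 16, 64), torch.randn(1, 16, 64)
k_txt, v_txt = torch.randn(1, 8, 64), torch.randn(1, 8, 64)

# 1) Source pass: record the self-attention map encoding object layout.
_, src_map = attention(q_img, k_img, v_img)

# 2) Edit pass: inject the source map so the structure stays fixed ...
out_self, _ = attention(q_img, k_img, v_img, attn_override=src_map)

# ... and amplify the cross-attention value of the attribute token
# (e.g. token 3 = "golden") to strengthen the target color/material.
out_cross, _ = attention(q_img, k_txt, v_txt, v_scale=1.5, token_idx=3)
```

In practice, such hooks would be registered inside the denoising U-Net's attention layers over a range of diffusion timesteps; the single forward pass above only isolates the core tensor operations.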