Demographic bias in text-to-image (T2I) generation is well studied, yet demographic-conditioned failures in instruction-guided image-to-image (I2I) editing remain underexplored. We examine whether identical edit instructions yield systematically different outcomes across subject demographics in open-weight I2I editors. We formalize two failure modes: Soft Erasure, where edits are silently weakened or ignored in the output image, and Stereotype Replacement, where edits introduce unrequested, stereotype-consistent attributes. We introduce a controlled benchmark that probes demographic-conditioned behavior by generating and editing portraits conditioned on race, gender, and age using a diagnostic prompt set, and we evaluate multiple editors with vision-language model (VLM) scoring and human evaluation. Our analysis shows that identity-preservation failures are pervasive, demographically uneven, and shaped by implicit social priors, including occupation-driven gender inference. Finally, we demonstrate that a prompt-level identity constraint, applied without any model updates, can substantially reduce demographic change for minority groups while leaving majority-group portraits largely unchanged, revealing asymmetric identity priors in current editors. Together, our findings establish identity preservation as a central and demographically uneven failure mode in I2I editing and motivate the development of demographically robust editing systems. Project page: https://seochan99.github.io/i2i-demographic-bias