This paper examines algorithmic lookism, the systematic preferential treatment of individuals based on physical appearance, in text-to-image (T2I) generative AI and a downstream gender classification task. Through an analysis of 26,400 synthetic faces created with Stable Diffusion 2.1 and 3.5 Medium, we demonstrate that generative AI models systematically associate facial attractiveness with positive attributes and unattractiveness with negative ones, mirroring socially constructed biases rather than evidence-based correlations. Furthermore, we find significant gender bias in three gender classification algorithms, with error rates depending on the attributes of the input faces. Our findings reveal three critical harms: (1) the systematic encoding of attractiveness-positive attribute associations in T2I models; (2) gender disparities in classification systems, where women's faces, particularly those generated with negative attributes, suffer substantially higher misclassification rates than men's; and (3) intensifying aesthetic constraints in newer models through age homogenization, gendered exposure patterns, and geographic reductionism. These convergent patterns reveal algorithmic lookism as systematic infrastructure operating across AI vision systems, compounding existing inequalities through both representation and recognition. Disclaimer: This work includes visual and textual content that reflects stereotypical associations between physical appearance and socially constructed attributes, including gender, race, and traits associated with social desirability. Any such associations found in this study emerge from biases embedded in generative AI systems, not from empirical truths or the authors' views.