This study analyzed images of various occupations generated by three popular generative artificial intelligence (AI) tools (Midjourney, Stable Diffusion, and DALL·E 2) to investigate potential bias in AI image generators. Our analysis revealed two overarching areas of concern: (1) systematic gender and racial biases, and (2) subtler biases in facial expressions and appearances. First, we found that all three AI generators exhibited bias against women and African Americans. Moreover, the gender and racial biases uncovered in our analysis were even more pronounced than the status quo reflected in labor force statistics or Google Images, intensifying the harmful biases our society is actively striving to rectify. Second, our study uncovered more nuanced prejudices in the portrayal of emotions and appearances. For example, women were depicted as younger, with more smiles and happiness, while men were depicted as older, with more neutral expressions and anger, posing a risk that generative AI models may unintentionally depict women as more submissive and less competent than men. Because they are less overt, such nuanced biases may be even more problematic, as they can permeate perceptions unconsciously and may be more difficult to rectify. Although the extent of bias varied by model, the direction of bias remained consistent across both commercial and open-source AI generators. As these tools become commonplace, our study highlights the urgency of identifying and mitigating such biases in generative AI, reinforcing the commitment to ensuring that AI technologies benefit all of humanity and foster a more inclusive future.