In the evolving landscape of computer vision (CV) technologies, the automatic detection and interpretation of gender and emotion in images is a critical area of study. This paper investigates social biases in CV models, emphasizing the limitations of traditional evaluation metrics such as precision, recall, and accuracy. These metrics often fall short in capturing the complexities of gender and emotion, which are fluid and culturally nuanced constructs. Our study proposes a sociotechnical framework for evaluating CV models that incorporates both technical performance measures and considerations of social fairness. Using a dataset of 5,570 images related to vaccination and climate change, we empirically compared the performance of various CV models, including traditional models such as DeepFace and FER, and generative models such as GPT-4 Vision. As part of our analysis, we manually validated the gender and emotional expressions in a subset of images to establish benchmarks. Our findings reveal that while GPT-4 Vision outperforms the other models in technical accuracy for gender classification, it exhibits discriminatory biases, particularly in response to transgender and non-binary personas. Furthermore, the model's emotion detection skews heavily towards positive emotions, with a notable bias towards associating female images with happiness, especially when prompted by male personas. These findings underscore the necessity of developing more comprehensive evaluation criteria that address both validity and discriminatory biases in CV models. Our proposed framework provides guidelines for researchers to critically assess CV tools, ensuring their application in communication research is both ethical and effective. The significant contribution of this study lies in its emphasis on a sociotechnical approach, advocating for CV technologies that support social good and mitigate biases rather than perpetuate them.
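The core claim above, that aggregate metrics such as accuracy can mask group-level bias, can be illustrated with a minimal sketch. The records below are purely hypothetical (they are not drawn from the paper's 5,570-image dataset), and the group labels and helper names are our own; the point is only that a classifier can look acceptable overall while performing very unevenly across demographic groups.

```python
# Illustrative sketch: aggregate accuracy can hide group-level disparities.
# All data here is fabricated for demonstration; it is not the study's dataset.
from collections import defaultdict

# Hypothetical (true_label, predicted_label, group) triples for a classifier.
records = [
    # Group "A": 9 of 10 predictions correct.
    *[("f", "f", "A")] * 9, ("f", "m", "A"),
    # Group "B": 6 of 10 predictions correct.
    *[("m", "m", "B")] * 6, *[("m", "f", "B")] * 4,
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(y == p for y, p, _ in rows) / len(rows)

# Partition the records by demographic group.
by_group = defaultdict(list)
for row in records:
    by_group[row[2]].append(row)

overall = accuracy(records)  # 0.75 overall: looks acceptable in isolation...
per_group = {g: accuracy(rows) for g, rows in by_group.items()}
# ...but a 0.30 accuracy gap separates the best- and worst-served groups.
fairness_gap = max(per_group.values()) - min(per_group.values())
```

A sociotechnical evaluation in the spirit of the proposed framework would report `per_group` and `fairness_gap` alongside `overall`, rather than letting the single aggregate number stand in for model quality.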