In the evolving landscape of computer vision (CV) technologies, the automatic detection and interpretation of gender and emotion in images is a critical area of study. This paper investigates social biases in CV models, emphasizing the limitations of traditional evaluation metrics such as precision, recall, and accuracy. These metrics often fall short in capturing the complexities of gender and emotion, which are fluid and culturally nuanced constructs. Our study proposes a sociotechnical framework for evaluating CV models that incorporates both technical performance measures and considerations of social fairness. Using a dataset of 5,570 images related to vaccination and climate change, we empirically compared the performance of several CV models, including traditional models such as DeepFace and FER and generative models such as GPT-4 Vision. As a benchmark, we manually validated the gender and emotional expressions in a subset of the images. Our findings reveal that while GPT-4 Vision outperforms the other models in technical accuracy for gender classification, it exhibits discriminatory biases, particularly in response to transgender and non-binary personas. Furthermore, the model's emotion detection skews heavily toward positive emotions, with a notable bias toward associating images of women with happiness, especially when the model is prompted with a male persona. These findings underscore the necessity of developing more comprehensive evaluation criteria that address both validity and discriminatory bias in CV models. Our proposed framework offers guidelines for researchers to critically assess CV tools, ensuring that their application in communication research is both ethical and effective. The central contribution of this study is its sociotechnical approach, which advocates for CV technologies that support social good and mitigate biases rather than perpetuate them.
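As context for the critique above, the conventional definitions of these metrics for a binary classification task (reproduced here for reference, not drawn from the paper itself; TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives) are:

$$
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}.
$$

Each of these aggregates over the whole evaluation set, which is precisely why a strong headline score can conceal how errors are distributed across demographic groups.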
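To make the sociotechnical argument concrete, the sketch below shows one way a researcher might disaggregate a model's gender-classification accuracy by demographic group against a manually validated benchmark, the kind of check the proposed framework calls for. This is an illustrative sketch, not the authors' code: the record structure, group labels, and values are hypothetical placeholders.

```python
# Minimal sketch of a group-wise accuracy audit. Aggregate accuracy can look
# strong while masking group-level disparities, which is the failure mode the
# paper's framework is designed to surface. All data below is hypothetical.
from sklearn.metrics import accuracy_score

# Hypothetical manually validated benchmark: each record pairs a model's
# predicted gender label with a human-validated label and a group tag.
records = [
    {"pred": "woman", "true": "woman", "group": "cisgender"},
    {"pred": "man",   "true": "man",   "group": "cisgender"},
    {"pred": "man",   "true": "woman", "group": "non-binary"},
    {"pred": "woman", "true": "woman", "group": "non-binary"},
    # ... in practice, the manually validated subset of the full dataset
]

def group_accuracy(records, group=None):
    """Accuracy over all records, or over a single demographic group."""
    subset = [r for r in records if group is None or r["group"] == group]
    return accuracy_score([r["true"] for r in subset],
                          [r["pred"] for r in subset])

overall = group_accuracy(records)
per_group = {g: group_accuracy(records, g)
             for g in {r["group"] for r in records}}
print(f"overall accuracy: {overall:.2f}")  # can be high ...
print(per_group)                           # ... while one group lags badly
```

Reporting per-group accuracy alongside the aggregate figure exposes exactly the disparities that a single accuracy number hides, and the same disaggregation applies directly to precision and recall.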