Human-annotated content is often used to train machine learning (ML) models. Recently, however, language and multi-modal foundation models have been used to replace and scale up human annotators' efforts. This study compares human-generated and ML-generated annotations of images representing diverse socio-economic contexts. We aim to understand differences in perception and identify potential biases in content interpretation. Our dataset comprises images of people from various geographical regions and income levels, covering a range of daily activities and home environments. We compare human and ML-generated annotations semantically and evaluate their impact on predictive models. Our results show the highest similarity between ML captions and human labels at a low level, i.e., in the types of words that appear and in sentence structure, but all three annotation sets are alike in how similar or dissimilar they perceive images across different regions to be. Additionally, ML captions yielded the best overall region classification performance, while ML objects and ML captions performed best overall for income regression. The varying performance across annotation sets underscores that all annotations are valuable and that human-generated annotations cannot yet be fully replaced.