As artificial agents increasingly integrate into professional environments, fundamental questions have emerged about how societal biases influence human-robot selection decisions. We conducted two experiments (N = 1,038) examining how occupational contexts and stereotype activation shape choices of robotic agents across construction, healthcare, educational, and athletic domains. Participants selected among artificial agents that varied systematically in skin tone and anthropomorphic characteristics. The results revealed distinct context-dependent patterns: healthcare and educational scenarios showed strong favoritism toward lighter-skinned artificial agents, whereas construction and athletic contexts showed greater acceptance of darker-toned alternatives. Participant race was associated with systematic differences in selection patterns across professional domains. The second experiment demonstrated that exposure to human professionals from specific racial backgrounds systematically shifted subsequent preferences for robotic agents in stereotype-consistent directions. These findings indicate that occupational biases and color-based discrimination transfer directly from human-human to human-robot evaluation contexts, highlighting mechanisms through which robotic deployment may unintentionally perpetuate existing social inequalities.