Online power-asymmetric conflicts are prevalent, and most platforms currently rely on human moderators to handle them. Prior studies have extensively investigated human moderation biases across scenarios, but moderation biases under power-asymmetric conflicts remain unexplored. We therefore investigate what types of power-related biases human moderators exhibit when moderating power-asymmetric conflicts (RQ1) and how AI suggestions influence these biases (RQ2). To this end, we conducted a mixed-design experiment with 50 participants, using real conflicts between consumers and merchants as the scenario. Results reveal several biases toward supporting the more powerful party in both moderation modes. AI assistance alleviates most biases in human moderation but also amplifies a few. Based on these findings, we offer insights for future research on human moderation and on human-AI collaborative moderation systems for power-asymmetric conflicts.