Social media platforms like Facebook and Reddit host thousands of user-governed online communities. These platforms sanction communities that frequently violate platform policies; however, public perceptions of such sanctions remain unclear. In a pre-registered survey conducted in the US, I explore user perceptions of content moderation for communities that frequently feature hate speech, violent content, and sexually explicit content. Two community-wide moderation interventions are tested: (1) community bans, where all community posts are removed, and (2) community warning labels, where an interstitial warning label precedes access. I examine how third-person effects and support for free speech influence user approval of these interventions. My regression analyses show that presumed effects on others are a significant predictor of support for both interventions, while free speech beliefs significantly shape participants' preference for warning labels. Analyzing the open-ended responses, I find that community-wide bans are often perceived as too coarse and that users instead value sanctions proportionate to the severity and type of infractions. I report concerns that norm-violating communities could reinforce inappropriate behaviors and show how users' choice of sanctions is influenced by the sanctions' perceived effectiveness. I discuss the implications of these results for HCI research on online harms and content moderation.