Counterspeech offers a non-repressive approach to moderating hate speech in online communities. Research has examined how counterspeech chatbots restrain hate speakers and support targets, but their impact on bystanders remains unclear. We therefore developed a counterspeech strategy framework and built \textit{Civilbot} for a mixed-methods within-subjects study. Bystanders generally viewed Civilbot as credible and normative, though its shallow reasoning limited its persuasiveness. Its behavioural effects were subtle: when performing well, it could guide participation or act as a stand-in; when performing poorly, it could discourage bystanders or motivate them to step in. Strategy proved critical: cognitive strategies that appeal to reason, especially when paired with a positive tone, were relatively effective, while a mismatch between context and strategy could weaken impact. Based on these findings, we offer design insights for mobilizing bystanders and shaping online discourse, highlighting when to intervene and how to do so through reasoning-driven and context-aware strategies.