Online abuse, a persistent aspect of social platform interactions, impacts user well-being and exposes flaws in platform design, including insufficient detection efforts and inadequate victim protection. Ensuring safe platform interactions requires integrating victim perspectives into the design of abuse detection and response systems. In this paper, we conduct surveys (n = 230) and semi-structured interviews (n = 15) with students at a minority-serving institution in the US to explore their experiences with abuse across a variety of social platforms, their defense strategies, and their recommendations for how social platforms can improve abuse responses. Building on our findings, we propose design requirements for abuse defense systems and discuss the role of privacy, anonymity, and abuse attribution requirements in their implementation. We introduce ARI, a blueprint for a unified, transparent, and personalized abuse response system for social platforms that sustainably detects abuse by leveraging the expertise of platform users, incentivized with proceeds obtained from abusers.