Hate speech remains a pressing challenge on social media, where platform moderation often fails to protect targeted users. Personal moderation tools that let users decide how content is filtered can address some of these shortcomings. However, it remains an open question which screens (e.g., the comments, the Reels tab, or the home feed) users want personal moderation on and which features they value most. To address these gaps, we conducted a three-wave Delphi study with 40 activists who had experienced hate speech, combining quantitative ratings and rankings with open questions about required features. Participants prioritized personal moderation for conversational and algorithmically curated screens. They valued features allowing for reversibility and oversight across all screens, whereas input-based, content-type-specific, and highly automated features were valued in a more screen-specific way. We discuss the importance of personal moderation and offer user-centered design recommendations for personal moderation on Instagram.