The centralized content moderation paradigm both falls short and over-reaches: 1) it fails to account for the subjective nature of harm, and 2) it responds to content deemed harmful with blunt suppression, even when such content could be salvaged. We first investigate this problem through formative interviews, documenting how seemingly benign content becomes harmful in light of individual life experiences. Based on these insights, we developed DIY-MOD, a browser extension that operationalizes a new paradigm: personalized content transformation. Operating on a user's own definition of harm, DIY-MOD transforms sensitive elements within content in real time instead of suppressing the content itself. The system selects the most appropriate transformation for each piece of content from a diverse palette, ranging from obfuscation to artistic stylization, to match the user's specific needs while preserving the content's informational value. Our two user studies demonstrate that this approach increases users' sense of agency and safety, enabling them to engage with content and communities they previously had to avoid.