We show that perceptual hashing, a technology crucial for detecting and removing image-based sexual abuse (IBSA) online, is vulnerable to low-budget inversion attacks built on generative AI. These attacks jeopardize the privacy of users, especially members of vulnerable groups. We advocate implementing secure hash matching in IBSA removal tools to mitigate potentially fatal consequences.