Recently, the data protection practices of researchers in human-computer interaction and beyond have drawn increasing attention. Initial results suggest that researchers struggle with anonymization, partly due to a lack of clear, actionable guidance. In this work, we propose simulating re-identification attacks using a red-team-versus-blue-team approach, a technique commonly employed in security testing: one team tries to re-identify data while the other tries to prevent it. We discuss our experience applying this method to data collected in a mixed-methods study in human-centered privacy, and we present usable materials for researchers to apply red teaming when anonymizing and publishing their studies' data.