AI creates and exacerbates privacy risks, yet practitioners lack effective resources to identify and mitigate these risks. We present Privy, a tool that guides practitioners without privacy expertise through structured privacy impact assessments to: (i) identify relevant risks in novel AI product concepts, and (ii) propose appropriate mitigations. Privy was shaped by a formative study with 11 practitioners, which informed two versions -- one LLM-powered, the other template-based. We evaluated these two versions of Privy through a between-subjects, controlled study with 24 additional practitioners, whose assessments were reviewed by 13 independent privacy experts. Results show that Privy helps practitioners produce privacy assessments that experts deemed high quality: practitioners identified relevant risks and proposed appropriate mitigation strategies. These effects were amplified in the LLM-powered version. Practitioners themselves rated Privy as useful and usable, and their feedback illustrates how it helps overcome long-standing awareness, motivation, and ability barriers in privacy work.