The rise of powerful AI models, more formally $\textit{General-Purpose AI Systems}$ (GPAIS), has led to impressive leaps in performance across a wide range of tasks. At the same time, researchers and practitioners alike have raised numerous privacy concerns, producing a wealth of literature on the privacy risks and vulnerabilities of AI models. Surveys of these risks differ in focus, yielding disparate sets of privacy risks with no unifying taxonomy. We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS, together with proposed mitigation strategies. The resulting privacy framework unifies the identified risks and mitigations at a technical level accessible to non-experts. It serves as the basis for a practitioner-focused interview study assessing how technical stakeholders perceive privacy risks and mitigations in GPAIS.