The rise of end-user applications powered by large language models (LLMs), including both conversational interfaces and add-ons to existing graphical user interfaces (GUIs), introduces new privacy challenges. However, many users remain unaware of these risks. This paper explores methods to increase user awareness of the privacy risks associated with LLMs in end-user applications. We conducted five co-design workshops to uncover users' privacy concerns and their demand for contextual privacy information within LLMs. Based on these insights, we developed CLEAR (Contextual LLM-Empowered Privacy Policy Analysis and Risk Generation), a just-in-time contextual assistant designed to help users identify sensitive information, summarize relevant privacy policies, and highlight potential risks when sharing information with LLMs. We evaluated the usability and usefulness of CLEAR in two example domains: ChatGPT and the Gemini plugin in Gmail. Our findings demonstrated that CLEAR is easy to use and improves users' understanding of data practices and privacy risks. We also discussed the dual role of LLMs in posing and mitigating privacy risks, offering design and policy implications.