Large language models (LLMs) are increasingly integrated into daily life through conversational interfaces, processing user data via natural language inputs and exhibiting advanced reasoning capabilities, which raises new concerns about user control over privacy. While much research has focused on potential privacy risks, less attention has been paid to the data control mechanisms these platforms provide. This study examines six conversational LLM platforms, analyzing how they define and implement features for users to access, edit, delete, and share data. Our analysis reveals an emerging paradigm of data control in conversational LLM platforms, where user data is generated and derived through interaction itself, natural language enables flexible yet often ambiguous control, and multi-user interactions with shared data raise questions of co-ownership and governance. Based on these findings, we offer practical insights for platform developers, policymakers, and researchers to design more effective and usable privacy controls in LLM-powered conversational interactions.