Large language model (LLM)-based AI delegates are increasingly used to act on behalf of users, assisting them with a wide range of tasks through conversational interfaces. Despite their advantages, they raise concerns about potential privacy leakage, particularly in scenarios involving social interactions. While existing research has focused on protecting privacy by limiting AI delegates' access to sensitive user information, many social scenarios require disclosing private details to achieve desired outcomes, necessitating a balance between privacy protection and disclosure. To address this challenge, we conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios, and then propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, supporting its use in diverse and dynamic social interactions.