Personal large language model (LLM) agents increasingly perform tasks that require access to user data, raising concerns about appropriate data disclosure. We show that relying solely on LLMs to make data-sharing decisions is insufficient. Prompting LLMs with general privacy norms fails to capture individual users' privacy preferences, while providing prior user data-sharing decisions through in-context learning (ICL) leads to unreliable and opaque reasoning. To address these limitations, we propose ARIEL (Agentic Reasoning with Individualized Entailment Logic), a framework that combines LLMs with rule-based logic to enable structured, personalized privacy reasoning. The core mechanism of ARIEL determines whether a user's prior decision on a data-sharing request $\textit{logically entails}$ the same decision for a new request. Experimental evaluations using advanced models and public datasets show that ARIEL reduces the F1 error rate for appropriate judgments by $\textbf{40.6\%}$ compared to standard ICL-based reasoning, i.e., it is substantially better at correctly identifying requests the user would approve. These results demonstrate that integrating LLMs with logical entailment provides an effective and interpretable approach to automating personalized privacy decisions.
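The entailment idea described above can be illustrated with a minimal sketch. All names, the request attributes, and the category hierarchy below are hypothetical illustrations under the assumption that requests can be compared along a generalization hierarchy; this is not ARIEL's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical generalization hierarchy: child category -> parent category.
PARENT = {
    "home_address": "location",
    "gps_trace": "location",
    "location": "personal_data",
    "email": "contact_info",
    "contact_info": "personal_data",
}

def generalizations(attr: str) -> set:
    """Return attr together with all of its ancestor categories."""
    seen = {attr}
    while attr in PARENT:
        attr = PARENT[attr]
        seen.add(attr)
    return seen

@dataclass(frozen=True)
class Request:
    data_type: str
    recipient: str

def entails(prior: Request, prior_decision: bool, new: Request) -> Optional[bool]:
    """If the prior decision logically covers the new request, return the
    same decision; otherwise return None (defer to other reasoning)."""
    if prior.recipient != new.recipient:
        return None
    # Approving a broader category entails approving any narrower one.
    if prior_decision and prior.data_type in generalizations(new.data_type):
        return True
    # Denying a narrower category entails denying any broader request
    # that would include it.
    if not prior_decision and new.data_type in generalizations(prior.data_type):
        return False
    return None

# A prior approval of "location" sharing entails approving a "gps_trace" request.
print(entails(Request("location", "AdCo"), True, Request("gps_trace", "AdCo")))   # True
# A prior denial of "gps_trace" entails denying the broader "location" request.
print(entails(Request("gps_trace", "AdCo"), False, Request("location", "AdCo")))  # False
# Unrelated categories: no entailment, defer the decision.
print(entails(Request("email", "AdCo"), True, Request("gps_trace", "AdCo")))      # None
```

When no entailment holds, the decision falls through to other reasoning (e.g., querying the user), which is what makes the rule-based component interpretable: every automated decision traces back to a specific prior decision that covers it.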