Personalization is crucial for aligning Large Language Model (LLM) outputs with individual user preferences and background knowledge. State-of-the-art solutions are based on retrieval augmentation, where relevant context from a user profile is retrieved for LLM consumption. These methods face a trade-off between exposing retrieved private data to cloud providers and relying on less capable local models. We introduce $P^3$, an interactive framework for high-quality personalization without revealing private profiles to server-side LLMs. In $P^3$, a large server-side model generates a sequence of $k$ draft tokens based solely on the user query, while a small client-side model, with retrieval access to the user's private profile, evaluates and modifies these drafts to better reflect user preferences. This process repeats until an end token is generated. Experiments on LaMP-QA, a recent benchmark consisting of three personalized question answering datasets, show that $P^3$ consistently outperforms both non-personalized server-side and personalized client-side baselines, achieving statistically significant improvements of $7.4\%$ to $9\%$ on average. Importantly, $P^3$ recovers $90.3\%$ to $95.7\%$ of the utility of a ``leaky'' upper-bound scenario in which the full profile is exposed to the large server-side model. Privacy analyses, including linkability and attribute inference attacks, indicate that $P^3$ preserves the privacy of a non-personalized server-side model, introducing only marginal additional leakage ($1.5\%$--$3.5\%$) compared to submitting a query without any personal context. Additionally, the framework is efficient for edge deployment, with the client-side model generating only $9.2\%$ of the total tokens. These results demonstrate that $P^3$ provides a practical, effective solution for personalized generation with improved privacy.
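The interaction described above can be sketched as a simple draft-and-refine loop. This is an illustrative simulation only, not the authors' implementation: all names (\texttt{draft\_k\_tokens}, \texttt{refine\_draft}) and the toy token pool are hypothetical stand-ins for the server-side LLM and the client-side model with profile retrieval.

```python
# Hedged sketch of the P^3 protocol: the server proposes k draft tokens from
# the query and shared prefix alone; the client, which holds the private
# profile, accepts or rewrites them. Stand-in functions, not real models.

K = 4          # number of draft tokens the server proposes per round
END = "<eos>"  # end-of-sequence token

def draft_k_tokens(query, prefix, k):
    """Server-side stand-in: proposes k draft tokens from the query and the
    shared prefix only -- it never sees the private profile."""
    pool = ["generic", "answer", "tokens", "here", END]
    start = len(prefix) % len(pool)
    return pool[start:start + k] or [END]

def refine_draft(draft, profile_context):
    """Client-side stand-in: accepts or rewrites draft tokens using retrieved
    private context (here, a trivial keyword substitution)."""
    return [profile_context.get(tok, tok) for tok in draft]

def p3_generate(query, profile_context, max_rounds=8):
    """Repeat draft -> refine until the end token is produced."""
    output = []
    for _ in range(max_rounds):
        draft = draft_k_tokens(query, output, K)       # server: no profile access
        refined = refine_draft(draft, profile_context)  # client: private profile
        for tok in refined:
            if tok == END:
                return output
            output.append(tok)
    return output
```

In this toy run, only the client-side substitution table ever touches "private" data, mirroring the paper's claim that the server sees just the query and the evolving (refined) prefix.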