Large language models (LLMs) typically generate identical or similar responses for all users given the same prompt, posing serious safety risks in high-stakes applications where user vulnerabilities differ widely. Existing safety evaluations rely primarily on context-independent metrics, such as factuality, bias, or toxicity, overlooking the fact that the same response may carry divergent risks depending on the user's background or condition. We introduce personalized safety to fill this gap and present PENGUIN, a benchmark comprising 14,000 scenarios across seven sensitive domains, each with both context-rich and context-free variants. Evaluating six leading LLMs, we show that supplying personalized user information improves safety scores by 43.2%, confirming the effectiveness of personalization in safety alignment. However, not all context attributes contribute equally to safety. To address this, we develop RAISE, a training-free, two-stage agent framework that strategically acquires user-specific background information. RAISE improves safety scores by up to 31.6% over six vanilla LLMs while maintaining a low interaction cost of just 2.7 user queries on average. Our findings highlight the importance of selective information gathering in safety-critical domains and offer a practical way to personalize LLM responses without model retraining. This work establishes a foundation for safety research that adapts to individual user contexts rather than assuming a universal standard of harm.
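To make the two-stage idea concrete, the sketch below illustrates one way an acquisition-then-response loop of the kind the abstract describes could look: a first stage that selectively asks the user about safety-relevant background attributes under a small query budget, and a second stage that conditions the final answer on whatever context was gathered. This is a minimal, hypothetical illustration; the function names, attribute list, prompts, and budget are assumptions for exposition, not the paper's actual RAISE implementation.

```python
# Hypothetical sketch of a two-stage acquire-then-respond loop (not the paper's code).
from typing import Callable, Dict, List

def respond_with_acquired_context(
    user_query: str,
    ask_user: Callable[[str], str],    # callback that poses one clarifying question to the user
    llm: Callable[[str], str],         # any chat-completion-style LLM call
    candidate_attributes: List[str],   # e.g. ["age", "mental state", "medical history"] (illustrative)
    max_queries: int = 3,              # small budget, in the spirit of ~2.7 queries on average
) -> str:
    """Stage 1: selectively acquire user context. Stage 2: generate a personalized response."""
    acquired: Dict[str, str] = {}

    # Stage 1: acquisition -- ask only about attributes the model judges safety-relevant.
    for attr in candidate_attributes:
        if len(acquired) >= max_queries:
            break
        decision = llm(
            f"Query: {user_query}\nKnown context: {acquired}\n"
            f"Would knowing the user's '{attr}' change how safely you should answer? Reply YES or NO."
        )
        if decision.strip().upper().startswith("YES"):
            acquired[attr] = ask_user(f"Could you share your {attr}?")

    # Stage 2: response -- condition the final answer on the acquired background.
    context = "\n".join(f"- {k}: {v}" for k, v in acquired.items()) or "- none provided"
    return llm(
        f"User background:\n{context}\n\nUser query: {user_query}\n"
        "Answer in a way that is safe and appropriate for this specific user."
    )
```

The design choice the abstract emphasizes is selectivity: rather than collecting every attribute, the first stage spends a small, bounded number of user queries on the attributes most likely to change what a safe response looks like.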