Large Language Models (LLMs) are increasingly deployed in domains such as education, mental health, and customer support, where stable and consistent personas are critical for reliability. Yet existing studies focus on narrative or role-playing tasks and overlook how adversarial conversational history alone can reshape induced personas. Black-box persona manipulation remains unexplored, raising concerns about robustness in realistic interactions. In response, we introduce the task of persona editing, which adversarially steers LLM traits through user-side inputs under a black-box, inference-only setting. To this end, we propose PHISH (Persona Hijacking via Implicit Steering in History), the first framework to expose a new vulnerability in LLM safety: it embeds semantically loaded cues into user queries to gradually induce reverse personas. We also define a metric to quantify attack success. Across 3 benchmarks and 8 LLMs, PHISH predictably shifts personas, triggers collateral changes in correlated traits, and exhibits stronger effects in multi-turn settings. In high-risk domains (mental health, tutoring, and customer support), PHISH reliably manipulates personas, as validated by both human and LLM-as-Judge evaluations. Importantly, PHISH causes only a small reduction in reasoning benchmark performance, leaving overall utility largely intact while still enabling significant persona manipulation. While current guardrails offer partial protection, they remain brittle under sustained attack. Our findings expose new vulnerabilities in personas and highlight the need for context-resilient personas in LLMs. Our codebase and dataset are available at: https://github.com/Jivnesh/PHISH