Role-playing systems powered by large language models (LLMs) have become increasingly influential in emotional communication applications. However, these systems are susceptible to character hallucination, where the model deviates from its predefined persona and generates responses inconsistent with the intended character. This paper presents the first systematic analysis of character hallucination from an attack perspective, introducing the RoleBreak framework. The framework identifies two core mechanisms, query sparsity and role-query conflict, as the key factors driving character hallucination. Leveraging these insights, we construct a novel dataset, RoleBreakEval, to evaluate existing hallucination-mitigation techniques. Our experiments reveal that even enhanced models trained to minimize hallucination remain vulnerable to attacks. To address these vulnerabilities, we propose a novel defense strategy, Narrator Mode, which generates supplemental context through narration to mitigate role-query conflicts and improve query generalization. Experimental results demonstrate that Narrator Mode significantly outperforms traditional refusal-based strategies by reducing hallucinations, enhancing fidelity to character roles and queries, and improving overall narrative coherence.