Personality control in Role-Playing Agents (RPAs) is commonly achieved either through training-free methods that inject persona descriptions and memory via prompts or retrieval-augmented generation (RAG), or through supervised fine-tuning (SFT) on persona-specific corpora. While SFT can be effective, it requires persona-labeled data and retraining for each new role, limiting flexibility. Prompt- and RAG-based signals, in contrast, are easy to apply but tend to be diluted over long dialogues, leading to persona drift and inconsistent behavior. To address this, we propose a contrastive Sparse Autoencoder (SAE) framework that learns facet-level personality control vectors aligned with the 30 facets of the Big Five model. A new 15,000-sample leakage-controlled corpus provides balanced supervision for each facet. The learned vectors are injected into the model's residual space and dynamically selected by a trait-activated routing module, enabling precise and interpretable personality steering. Experiments on large language models (LLMs) show that the proposed method maintains stable character fidelity and output quality across contextualized settings, outperforming Contrastive Activation Addition (CAA) and prompt-only baselines. The combined SAE+Prompt configuration achieves the best overall performance, confirming that contrastively trained latent vectors can strengthen persona control while preserving dialogue coherence.
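The steering step described above can be illustrated with a minimal sketch: a hidden state in the residual space is shifted by a weighted sum of facet control vectors, with the routing module selecting the top-activated facets. All names, shapes, and the `route_and_steer` helper below are hypothetical illustrations, not the paper's actual implementation; SAE training and the routing network itself are not shown.

```python
import numpy as np

def route_and_steer(hidden, facet_vectors, trait_scores, top_k=2, scale=1.0):
    """Add the top-k facet control vectors, weighted by router activations,
    to a residual-space hidden state (illustrative sketch only).

    hidden:        (d,) residual activation at one layer/token
    facet_vectors: (n_facets, d) learned facet-level control vectors
    trait_scores:  (n_facets,) trait-activation scores for the target persona
    """
    idx = np.argsort(trait_scores)[-top_k:]                    # trait-activated routing
    steer = (trait_scores[idx, None] * facet_vectors[idx]).sum(axis=0)
    return hidden + scale * steer

# Toy example: 30 facets (Big Five, 6 facets per trait), tiny hidden size.
rng = np.random.default_rng(0)
d, n_facets = 8, 30
h = rng.normal(size=d)
V = rng.normal(size=(n_facets, d))
scores = np.zeros(n_facets)
scores[[3, 17]] = [0.9, 0.6]      # two facets activated for this persona
h_steered = route_and_steer(h, V, scores)
print(h_steered.shape)  # (8,)
```

With all trait scores at zero the steering term vanishes and the hidden state passes through unchanged, which is the intended fallback when no persona facet is active.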