LLM role-playing, i.e., using LLMs to simulate specific personas, has emerged as a key capability in applications such as companionship, content creation, and digital games. While current models effectively capture character tones and knowledge, simulating the inner thoughts behind characters' behaviors remains a challenge. Toward cognitive simulation in LLM role-play, previous efforts suffer from two main deficiencies: a lack of data with high-quality reasoning traces, and a lack of reliable reward signals aligned with human preferences. In this paper, we propose HER, a unified framework for cognitive-level persona simulation. HER introduces dual-layer thinking, which distinguishes characters' first-person thinking from LLMs' third-person thinking. To bridge the two gaps above, we curate reasoning-augmented role-playing data via reverse engineering and construct human-aligned principles and reward models. Leveraging these resources, we train HER models based on Qwen3-32B via supervised and reinforcement learning. Extensive experiments validate the effectiveness of our approach. Notably, our models significantly outperform the Qwen3-32B baseline, achieving a 30.26-point improvement on the CoSER benchmark and a 14.97-point gain on the Minimax Role-Play Bench. Our datasets, principles, and models will be released to facilitate future research.