Role-play prompting is known to steer the behavior of language models by injecting a persona into the prompt, improving their zero-shot reasoning capabilities. However, such improvements are inconsistent across tasks and instances. This inconsistency suggests that zero-shot and role-play prompting offer complementary strengths rather than one being universally superior. Building on this insight, we propose Persona Switch, a novel decoding method that dynamically combines the benefits of both prompting strategies. Our method proceeds step-by-step, selecting the better output between zero-shot and role-play prompting at each step by comparing their output confidence, as measured by the logit gap. Experiments with widely-used LLMs demonstrate that Persona Switch consistently outperforms competitive baselines, achieving an accuracy improvement of up to 5.13%. Furthermore, we show that output confidence serves as an informative measure for selecting the more reliable output.
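The per-step selection described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: we assume the switch operates at the token level, that both prompts yield a logit vector over the same vocabulary at each step, and that "logit gap" means the difference between the top-1 and top-2 logits. All function names are hypothetical.

```python
# Hedged sketch of logit-gap-based step selection between two prompts.
# Assumption: confidence = top-1 logit minus top-2 logit; the more
# confident prompt's token is chosen greedily at each step.

def logit_gap(logits):
    """Confidence score: gap between the two largest logits."""
    top1, top2 = sorted(logits, reverse=True)[:2]
    return top1 - top2

def persona_switch_step(zero_shot_logits, role_play_logits):
    """Pick the next token id from whichever prompt is more confident."""
    if logit_gap(zero_shot_logits) >= logit_gap(role_play_logits):
        chosen = zero_shot_logits
    else:
        chosen = role_play_logits
    # Greedy decoding: argmax over the chosen logit vector.
    return max(range(len(chosen)), key=chosen.__getitem__)

# Toy example: the role-play logits have a larger top-1/top-2 gap
# (2.5 vs 0.1), so its argmax token (index 1) is selected.
zs = [2.0, 1.9, 0.1]  # zero-shot: gap 0.1 -> low confidence
rp = [0.2, 3.0, 0.5]  # role-play: gap 2.5 -> high confidence
print(persona_switch_step(zs, rp))  # -> 1
```

In a full decoding loop this selection would be repeated at every step, with the chosen token appended to both prompts' contexts before the next forward pass.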