One way to personalize and steer generations from large language models (LLMs) is to assign a persona: a role that describes how the user expects the LLM to behave (e.g., a helpful assistant, a teacher, a woman). This paper investigates how personas affect diverse aspects of model behavior. We assign 162 personas from 12 categories, spanning variables such as gender, sexual orientation, and occupation, to seven LLMs. We prompt the models to answer questions from five datasets covering objective tasks (e.g., questions about math and history) and subjective tasks (e.g., questions about beliefs and values). We also compare persona-based generations to two baseline settings: a control persona setting with 30 paraphrases of "a helpful assistant," which controls for the models' prompt sensitivity, and an empty persona setting where no persona is assigned. We find that, across all models and datasets, personas show greater variability than the control setting, and that some measures of persona behavior generalize across models.
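The three prompting settings described above can be sketched minimally as message construction, assuming a chat-style API where a persona is assigned via a system message; the function name, message schema, and example strings are illustrative, not the paper's actual implementation.

```python
def build_messages(question, persona=None):
    """Build a chat prompt, prepending a system message that assigns
    a persona when one is given (hypothetical schema for illustration)."""
    messages = []
    if persona is not None:
        messages.append({"role": "system", "content": f"You are {persona}."})
    messages.append({"role": "user", "content": question})
    return messages

question = "In what year did World War II end?"

# Persona setting: one of the 162 personas (e.g., "a teacher").
persona_msgs = build_messages(question, persona="a teacher")

# Control setting: one of 30 paraphrases of "a helpful assistant".
control_msgs = build_messages(question, persona="an assistant who is helpful")

# Empty setting: no persona assigned at all.
empty_msgs = build_messages(question)
```

Comparing generations across these three settings separates genuine persona effects from ordinary prompt sensitivity: the control paraphrases capture how much outputs vary under semantically equivalent rewordings alone.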