We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.