Recent high profile incidents in which the use of Large Language Models (LLMs) resulted in significant harm to individuals have brought about a growing interest in AI safety. One reason LLM safety issues occur is that models often have at least some non-zero probability of producing harmful outputs. In this work, we explore the following scenario: imagine an AI safety auditor is searching for catastrophic responses from an LLM (e.g. a "yes" responses to "can I fire an employee for being pregnant?"), and is able to query the model a limited number times (e.g. 1000 times). What is a strategy for querying the model that would efficiently find those failure responses? To this end, we propose output scouting: an approach that aims to generate semantically fluent outputs to a given prompt matching any target probability distribution. We then run experiments using two LLMs and find numerous examples of catastrophic responses. We conclude with a discussion that includes advice for practitioners who are looking to implement LLM auditing for catastrophic responses. We also release an open-source toolkit (https://github.com/joaopfonseca/outputscouting) that implements our auditing framework using the Hugging Face transformers library.
翻译:近期,大型语言模型(LLMs)的使用对个人造成严重伤害的高曝光事件引发了人们对人工智能安全日益增长的关注。LLM安全问题发生的原因之一是模型往往至少存在一定的非零概率产生有害输出。在本研究中,我们探讨以下场景:假设一位AI安全审计员正在搜索LLM中的灾难性响应(例如对“我可以因为员工怀孕而解雇她吗?”回答“可以”),且仅能对模型进行有限次数的查询(例如1000次)。何种查询策略能高效地发现这些故障响应?为此,我们提出输出侦察方法:该技术旨在针对给定提示生成符合任意目标概率分布的语义流畅输出。我们随后使用两种LLM进行实验,发现了大量灾难性响应的实例。最后我们展开讨论,为寻求实施LLM灾难性响应审计的实践者提供建议。同时,我们开源了一个基于Hugging Face transformers库实现的审计工具包(https://github.com/joaopfonseca/outputscouting)。