To deploy language models safely, it is crucial that they abstain from responding to inappropriate requests. Several prior studies test the safety claims of models by measuring how effectively they block malicious requests. In this work, we focus on evaluating the underlying techniques that cause models to abstain. We create SELECT, a benchmark derived from a set of benign concepts (e.g., "rivers") in a knowledge graph. The design of SELECT enables us to isolate the effects of abstention techniques from other safety-training procedures, as well as to evaluate their generalization and specificity. Using SELECT, we benchmark different abstention techniques across six open-weight and closed-source models. We find that the examined techniques do cause models to abstain, with abstention rates above $80\%$. However, these techniques are less effective for descendants of the target concepts, with refusal rates declining by $19\%$. We also characterize the generalization-vs-specificity trade-offs of the different techniques. Overall, no single technique is invariably better than the others. Our findings call for careful evaluation of the different aspects of abstention and can inform practitioners of the trade-offs involved.