A rapidly growing number of applications rely on a small set of closed-source language models (LMs). This dependency might introduce novel security risks if LMs develop self-recognition capabilities. Inspired by human identity verification methods, we propose a novel approach for assessing self-recognition in LMs using model-generated "security questions". Our test can be externally administered to keep track of frontier models as it does not require access to internal model parameters or output probabilities. We use our test to examine self-recognition in ten of the most capable open- and closed-source LMs currently publicly available. Our extensive experiments found no empirical evidence of general or consistent self-recognition in any examined LM. Instead, our results suggest that given a set of alternatives, LMs seek to pick the "best" answer, regardless of its origin. Moreover, we find indications that preferences about which models produce the best answers are consistent across LMs. We additionally uncover novel insights on position bias considerations for LMs in multiple-choice settings.
翻译:随着越来越多的应用依赖于少数闭源语言模型(LMs),如果这些模型发展出自我识别能力,可能会引入新的安全风险。受人类身份验证方法的启发,我们提出了一种利用模型生成的“安全问题”来评估语言模型自我识别能力的新方法。我们的测试可以从外部实施,以跟踪前沿模型,因为它不需要访问模型的内部参数或输出概率。我们使用该测试检查了当前公开可用的十个最先进的开源和闭源语言模型。大量实验未在任何被检模型中观察到普遍或一致的自我识别能力的经验证据。相反,我们的结果表明,给定一组备选答案时,语言模型倾向于选择“最佳”答案,而不考虑其来源。此外,我们发现不同语言模型对于哪个模型能产生最佳答案的偏好具有一致性。我们还揭示了在多项选择设置中语言模型存在位置偏差的新见解。