The disconnect between tokenizer creation and model training in language models allows specific inputs, such as the infamous SolidGoldMagikarp token, to induce unwanted model behaviour. Although such `glitch tokens', tokens that are present in the tokenizer vocabulary yet nearly or entirely absent from model training, have been observed across various models, a reliable method for identifying and addressing them has been missing. We present a comprehensive analysis of Large Language Model tokenizers, specifically targeting the detection of under-trained tokens. Through a combination of tokenizer analysis, model weight-based indicators, and prompting techniques, we develop novel and effective methods for automatically detecting these problematic tokens. Our findings demonstrate the prevalence of such tokens across a diverse set of models and provide insights into improving the efficiency and safety of language models.
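To make the idea of a weight-based indicator concrete, below is a minimal sketch, not the paper's exact procedure: unembedding rows that received few or no gradient updates during training tend to remain close to their shared mean, so high cosine similarity between a token's unembedding row and the vocabulary-wide mean flags it as an under-trained candidate. The model name (`gpt2`), the cutoff `k`, and the use of the Hugging Face `transformers` library are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM exposing its unembedding matrix
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Output (unembedding) matrix: shape (vocab_size, hidden_dim).
unembed = model.get_output_embeddings().weight.detach()

# Rows that were rarely updated cluster around the vocabulary-wide mean,
# so high cosine similarity to that mean row is suspicious.
mean_row = unembed.mean(dim=0, keepdim=True)
scores = torch.nn.functional.cosine_similarity(unembed, mean_row, dim=1)

# Flag the top-k candidates for follow-up prompting-based verification.
k = 20  # illustrative cutoff, not from the paper
for tok_id in torch.topk(scores, k).indices.tolist():
    print(tok_id, repr(tokenizer.convert_ids_to_tokens(tok_id)))
```

Candidates surfaced this way would still need the prompting-based checks the abstract mentions before being confirmed as genuine glitch tokens.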