The performance of Large Language Models (LLMs) degrades due to the temporal drift between the data used for model training and the newer text seen during inference. One understudied avenue of language change that causes this drift is the emergence of neologisms -- new word forms -- over time. We create a diverse resource of recent English neologisms using several popular collection methods. We analyze temporal drift with neologisms by comparing sentences containing new words against near-identical sentences in which the neologisms are replaced with existing substitute words. Model performance in machine translation is nearly halved when a single neologism is introduced into a sentence. Motivated by these results, we construct a benchmark to evaluate LLMs' ability to generalize to neologisms, using a range of natural language understanding tasks and model perplexity. Models with later knowledge cutoff dates yield lower perplexities and perform better on downstream tasks. LLMs are also affected differently depending on the linguistic origins of words, indicating that neologisms are difficult for static LLMs to address. We will release our benchmark and code for reproducing our experiments.
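The abstract describes perplexity comparisons between near-identical sentence pairs that differ only in a single word. The snippet below is a minimal sketch of that kind of paired comparison, not the authors' released code: it scores a sentence containing a neologism against a substituted counterpart under a small open causal language model. The model choice (gpt2) and the example sentence pair are illustrative assumptions.

```python
# Minimal sketch of a paired perplexity comparison (not the authors' code):
# score a sentence containing a neologism against a near-identical sentence
# in which the neologism is replaced by an existing substitute word.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM; its knowledge cutoff matters for neologisms
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(sentence: str) -> float:
    """Exponentiated mean token cross-entropy of the sentence under the LM."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

# Hypothetical neologism/substitute pair, for illustration only.
neologism_sent = "She spent the weekend doomscrolling on her phone."
substitute_sent = "She spent the weekend browsing news on her phone."

print(f"neologism : {perplexity(neologism_sent):.2f}")
print(f"substitute: {perplexity(substitute_sent):.2f}")
```

Under the paper's framing, a model whose training data predates the neologism would be expected to assign the neologism sentence a noticeably higher perplexity than its substituted counterpart.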