The dynamic nature of language, particularly evident in the slang and memes of the Internet, poses serious challenges to the adaptability of large language models (LLMs). Traditionally anchored to static datasets, these models often struggle to keep pace with the rapid linguistic evolution characteristic of online communities. This research aims to bridge this gap by enhancing LLMs' comprehension of evolving new concepts on the Internet without the high cost of continual retraining. In pursuit of this goal, we introduce $\textbf{SLANG}$, a benchmark designed to autonomously integrate novel data and assess LLMs' ability to comprehend emerging concepts, alongside $\textbf{FOCUS}$, an approach that uses causal inference to help LLMs understand new phrases and their colloquial context. Our benchmark and approach draw on real-world instances of linguistic shifts, which serve as contextual beacons, to form more precise and contextually relevant connections between newly emerging expressions and their meanings. Empirical analysis shows that our causal-inference-based approach outperforms baseline methods in both the precision and the relevance of its comprehension of Internet slang and memes.