Learning meaningful word embeddings is key to training a robust language model. The recent rise of Large Language Models (LLMs) has provided many new word, sentence, and document embedding models. Although LLMs have shown remarkable advances across a variety of NLP tasks, it remains unclear whether their performance improvements stem merely from scale or whether the underlying embeddings they produce differ significantly from those of classical encoding models such as Sentence-BERT (SBERT) or the Universal Sentence Encoder (USE). This paper systematically investigates this question by comparing classical word embedding techniques against LLM-based word embeddings in terms of their latent vector semantics. Our results show that LLMs tend to cluster semantically related words more tightly than classical models, and they also achieve higher average accuracy on the Bigger Analogy Test Set (BATS). Finally, some LLMs produce word embeddings similar to those of SBERT, a relatively lightweight classical model.