Contextualized embeddings based on large language models (LLMs) are available for many languages, but their coverage is often limited for lower-resourced languages. Using LLMs for such languages is often difficult due to their high computational cost, not only during training but also during inference. Static word embeddings are much more resource-efficient ("green") and thus still provide value, particularly for very low-resource languages. There is, however, a notable lack of comprehensive repositories of such embeddings for diverse languages. To address this gap, we present GrEmLIn, a centralized repository of green, static baseline embeddings for 87 mid- and low-resource languages. We compute GrEmLIn embeddings with a novel method that enhances GloVe embeddings by integrating multilingual graph knowledge, making our static embeddings competitive with LLM representations while remaining parameter-free at inference time. Our experiments demonstrate that GrEmLIn embeddings outperform state-of-the-art contextualized embeddings from E5 on the task of lexical similarity. They remain competitive in extrinsic evaluation tasks such as sentiment analysis and natural language inference, with average performance gaps of 5-10\% or less compared to state-of-the-art models given sufficient vocabulary overlap with the target task, and underperform only on topic classification. Our code and embeddings are publicly available at https://huggingface.co/DFKI.