Multilingual large language models (LLMs) have gained prominence, but concerns remain about their reliability beyond English. This study addresses the gap in cross-lingual semantic evaluation by introducing StingrayBench, a novel benchmark for cross-lingual sense disambiguation. In this paper, we demonstrate the use of false friends -- words that are orthographically similar but have completely different meanings in two languages -- as a possible approach to pinpoint the limitations of cross-lingual sense disambiguation in LLMs. We collect false friends in four language pairs, namely Indonesian-Malay, Indonesian-Tagalog, Chinese-Japanese, and English-German, and challenge LLMs to distinguish their use in context. In our analysis of various models, we observe that they tend to be biased toward higher-resource languages. We also propose new metrics for quantifying cross-lingual sense bias and comprehension based on our benchmark. Our work contributes to developing more diverse and inclusive language modeling, promoting fairer access for the wider multilingual community.