We present SeaEval, a benchmark for multilingual foundation models. In addition to characterizing how these models understand and reason with natural language, we also investigate how well they comprehend cultural practices, nuances, and values. Alongside standard accuracy metrics, we examine the brittleness of foundation models along the dimensions of semantics and multilinguality. Our analyses span both open-source and closed models, yielding empirical results across classic NLP tasks, reasoning, and cultural comprehension. Key findings indicate that (1) most models exhibit varied behavior when given paraphrased instructions; (2) many models still suffer from exposure bias (e.g., positional bias, majority-label bias); (3) for questions rooted in factual, scientific, and commonsense knowledge, consistent responses would be expected across semantically equivalent multilingual queries, yet most models surprisingly perform inconsistently on them; and (4) multilingually-trained models have not attained "balanced multilingual" capabilities. Our findings underscore the need for more generalizable semantic representations and enhanced multilingual contextualization. SeaEval can serve as a launchpad for more thorough investigations and evaluations of multilingual and multicultural scenarios.
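To make finding (3) concrete, cross-lingual consistency can be scored as the fraction of questions for which a model gives the same answer across all language versions of a semantically equivalent query. The sketch below is illustrative only, not SeaEval's exact metric; the function name and data layout are assumptions for this example.

```python
def cross_lingual_consistency(answers_by_language):
    """Fraction of questions answered identically in every language.

    answers_by_language: dict mapping a language code to a list of model
    answers, aligned so that index i is the same underlying question in
    every language. (Illustrative metric, not SeaEval's exact definition.)
    """
    # Group the i-th answer from each language together.
    per_question = zip(*answers_by_language.values())
    agree = sum(1 for answers in per_question if len(set(answers)) == 1)
    total = len(next(iter(answers_by_language.values())))
    return agree / total

# Toy example: 3 questions asked in English and Chinese.
answers = {
    "en": ["A", "B", "C"],
    "zh": ["A", "B", "D"],  # disagrees on the third question
}
print(cross_lingual_consistency(answers))  # 2/3
```

A multilingually "balanced" model would score near 1.0 on such knowledge questions; the abstract's finding is that most models fall well short of that.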