Large Language Models (LLMs) have demonstrated remarkable instruction-following capabilities across various applications. However, their performance in multilingual settings remains poorly understood, as existing evaluations lack fine-grained constraint analysis. We introduce XIFBench, a comprehensive constraint-based benchmark for assessing the multilingual instruction-following abilities of LLMs, featuring a novel taxonomy of five constraint categories and 465 parallel instructions across six languages spanning different resource levels. To ensure consistent cross-lingual evaluation, we develop a requirement-based protocol that leverages English requirements as semantic anchors, which are then used to validate the translated instructions across languages. Extensive experiments with various LLMs reveal notable variations in instruction-following performance across resource levels and identify key influencing factors, such as constraint category, instruction complexity, and cultural specificity.