Chain-of-thought (CoT) prompting in large language models (LLMs) has recently emerged as a powerful technique for eliciting reasoning and improving performance on various downstream tasks. As most research focuses on English, with few explorations in a multilingual context, the question of how reliable this reasoning capability is across languages remains open. To address this question directly, we study multilingual reasoning consistency across multiple languages, using popular open-source LLMs. First, we compile the first large-scale multilingual math reasoning dataset, mCoT-MATH, covering eleven diverse languages. Then, we introduce multilingual CoT instruction tuning to boost reasoning capability across languages and thereby improve model consistency. While existing LLMs show substantial variation across the languages we consider, with especially low performance on lesser-resourced languages, our 7B-parameter model mCoT achieves impressive consistency across languages, and superior or comparable performance to closed- and open-source models, even ones of much larger size.