Bridging the significant gap between large language models' English and non-English performance presents a major challenge. While some previous studies attempt to mitigate this gap with translated training data, the recently proposed question alignment framework leverages the model's English expertise to improve multilingual performance with minimal use of expensive, error-prone translation. In this paper, we explore how broadly this method can be applied by examining its effects on reasoning with and without chain-of-thought, as well as with program-of-thought. We also explore how to apply this framework to extremely large language models in an efficient manner, such as through proxy-tuning. Experimental results on the multilingual reasoning benchmarks mGSM, mSVAMP, xCSQA, and xNLI demonstrate that the question alignment framework can be extended to boost multilingual performance across diverse reasoning scenarios, model families, and sizes. For instance, when applied to the LLaMA2 models, it brings an average accuracy improvement of 12.2% on mGSM even with the 70B model. To understand the mechanism of its success, we analyze the representation space, generated responses, and data scales, revealing how question translation training strengthens language alignment within LLMs and shapes their working patterns.
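To make the proxy-tuning route concrete: proxy-tuning (Liu et al., 2024) steers a large untuned model at decoding time by adding the logit offset between a small tuned "expert" and its untuned counterpart, so the 70B model never needs fine-tuning itself. Below is a minimal sketch, not the paper's implementation; the model paths (in particular the question-aligned 7B checkpoint) are illustrative assumptions, and it assumes all three models share one vocabulary.

```python
# A minimal sketch of proxy-tuning at decoding time. Model paths are
# illustrative assumptions, not checkpoints released by the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

large = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")   # untuned base
expert = AutoModelForCausalLM.from_pretrained("path/to/question-aligned-7b") # hypothetical tuned proxy
anti = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")      # untuned proxy
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

@torch.no_grad()
def proxy_tuned_generate(prompt: str, max_new_tokens: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Next-token logits from each model (assumes a shared vocabulary).
        z_large = large(ids).logits[:, -1, :]
        z_expert = expert(ids).logits[:, -1, :]
        z_anti = anti(ids).logits[:, -1, :]
        # Core proxy-tuning update: shift the large model's logits by the
        # tuned-vs-untuned offset of the small proxy pair.
        z = z_large + (z_expert - z_anti)
        next_id = z.argmax(dim=-1, keepdim=True)  # greedy decoding for simplicity
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)
```

In this setup, only the small proxy pair is ever trained with question alignment; the offset it contributes is what transfers that alignment to the frozen 70B model at inference.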