In-Context Learning (ICL) is an essential emergent ability of Large Language Models (LLMs), and recent studies incorporate Chain-of-Thought (CoT) reasoning into ICL exemplars to enhance reasoning capability, particularly on mathematical tasks. However, as model capabilities continue to advance, it remains unclear whether CoT exemplars still benefit recent, stronger models on such tasks. Through systematic experiments, we find that for recent strong models such as the Qwen2.5 series, adding traditional CoT exemplars does not improve reasoning performance over Zero-Shot CoT; their primary function is instead to align the output format with human expectations. We further investigate the effectiveness of enhanced CoT exemplars constructed from the answers of advanced models such as \texttt{Qwen2.5-Max} and \texttt{DeepSeek-R1}. Experimental results show that these enhanced exemplars still fail to improve the model's reasoning performance. Further analysis reveals that models tend to ignore the exemplars and attend primarily to the instructions, yielding no observable gain in reasoning ability. Overall, our findings highlight the limitations of the current ICL+CoT framework for mathematical reasoning and call for a re-examination of the ICL paradigm and the definition of exemplars.