Large Language Models (LLMs) have demonstrated promising capabilities in solving mathematical reasoning tasks, leveraging Chain-of-Thought (CoT) data as a vital component in guiding answer generation. Current paradigms typically generate CoT and answers directly for a given problem, diverging to some extent from human problem-solving strategies. Humans often solve problems by recalling analogous cases and leveraging their solutions to reason about the current task. Inspired by this cognitive process, we propose \textbf{MetaLadder}, a novel framework that explicitly prompts LLMs to recall and reflect on meta-problems (structurally or semantically analogous problems) and their CoT solutions before addressing the target problem. Additionally, we introduce a problem-restating mechanism that enhances the model's comprehension of the target problem by regenerating the original question, further improving reasoning accuracy. In this way, the model achieves reasoning transfer from analogous problems, mimicking human-like "learning from examples" and generalization abilities. Extensive experiments on mathematical benchmarks demonstrate that MetaLadder significantly boosts LLMs' problem-solving accuracy, substantially outperforming standard CoT-based methods (a \textbf{10.3\%} accuracy gain) and other baselines. Our code and data have been released at https://github.com/LHL3341/MetaLadder.
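To make the described inference procedure concrete, the following is a minimal, hypothetical sketch of a MetaLadder-style prompt builder. The function name and prompt wording are illustrative assumptions, not the paper's actual templates; they only encode the three stages the abstract describes: recalling an analogous meta-problem with its CoT solution, restating the target problem, and then solving it.

```python
# Hypothetical sketch (illustrative only; not the paper's actual prompt templates).

def build_metaladder_prompt(target_problem: str) -> str:
    """Compose a prompt with the three stages described by MetaLadder:
    (1) recall an analogous meta-problem and its chain-of-thought solution,
    (2) restate the target problem to confirm understanding,
    (3) solve the target problem, transferring the recalled reasoning."""
    return (
        "Step 1: Recall a structurally or semantically analogous problem "
        "(a meta-problem) and write out its chain-of-thought solution.\n\n"
        "Step 2: Restate the target problem in your own words to confirm "
        "your understanding.\n\n"
        "Step 3: Solve the target problem step by step, transferring the "
        "reasoning from the meta-problem.\n\n"
        f"Target problem: {target_problem}"
    )

prompt = build_metaladder_prompt("If 3x + 5 = 20, what is x?")
print(prompt)
```

In practice, such a prompt would be sent to the LLM (or used to format fine-tuning data), so that the analogical recall and restating steps appear in the generation before the final answer.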