Existing approaches to mathematical reasoning with large language models (LLMs) rely on Chain-of-Thought (CoT) for generalizability or on Tool-Integrated Reasoning (TIR) for precise computation. While efforts have been made to combine the two, they rely primarily on post-selection or predefined strategies, leaving open the question of whether LLMs can autonomously adapt their reasoning strategy to their inherent capabilities. In this work, we propose TATA (Teaching LLMs According to Their Aptitude), an adaptive framework that enables LLMs to personalize their reasoning strategy spontaneously, aligning it with their intrinsic aptitude. TATA incorporates base-LLM-aware data selection during supervised fine-tuning (SFT) to tailor the training data to the model's abilities, equipping the LLM to autonomously determine and apply the appropriate reasoning strategy at test time. We evaluate TATA through extensive experiments on six mathematical reasoning benchmarks, using both general-purpose and math-specialized LLMs. Empirical results show that TATA effectively combines the complementary strengths of CoT and TIR, achieving superior or comparable performance with better inference efficiency than TIR alone. Further analysis underscores the critical role of aptitude-aware data selection in enabling LLMs to make effective, adaptive reasoning decisions aligned with their capabilities.
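The base-LLM-aware selection idea can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual algorithm: for each training problem, it estimates the base LLM's success rate under each reasoning style (CoT vs. TIR) and keeps the solution in whichever style the model handles more reliably. The `base_llm` callable and the `Problem` fields are stand-ins for a real sampling and answer-checking pipeline.

```python
# Hypothetical sketch of aptitude-aware SFT data selection (assumed
# interface, not the paper's implementation): keep, per problem, the
# solution style the base LLM solves more reliably.
from dataclasses import dataclass

@dataclass
class Problem:
    question: str
    cot_solution: str   # natural-language chain-of-thought target
    tir_solution: str   # tool-integrated (code-executing) target

def success_rate(base_llm, problem, style, n_samples=8):
    """Fraction of n_samples attempts in the given style that are
    judged correct; base_llm(question, style) -> bool is a stand-in
    for sampling plus answer verification."""
    hits = sum(base_llm(problem.question, style) for _ in range(n_samples))
    return hits / n_samples

def select_sft_data(base_llm, problems):
    """Build an SFT set matched to the base LLM's aptitude."""
    sft = []
    for p in problems:
        cot_acc = success_rate(base_llm, p, "cot")
        tir_acc = success_rate(base_llm, p, "tir")
        # Ties favor CoT here; the real criterion may weigh cost too.
        target = p.cot_solution if cot_acc >= tir_acc else p.tir_solution
        sft.append((p.question, target))
    return sft
```

Fine-tuning on the resulting pairs is what would let the model choose a strategy per problem at test time, rather than following a fixed or post-selected policy.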