Large Language Models (LLMs) increasingly exhibit strong reasoning abilities, often attributed to their capacity to generate chain-of-thought-style intermediate reasoning. Recent work suggests that exposure to code can further enhance these skills, but existing studies largely treat code as a generic training signal, leaving open the question of which properties of code actually contribute to improved reasoning. To address this gap, we study the structural complexity of code, which captures control flow and compositional structure that may shape how models internalise multi-step reasoning during fine-tuning. We examine two complementary settings: solution-driven complexity, where complexity varies across multiple solutions to the same problem, and problem-driven complexity, where complexity reflects variation in the underlying tasks. Using cyclomatic complexity and logical lines of code to construct controlled fine-tuning datasets, we evaluate a range of open-weight LLMs on diverse reasoning benchmarks. Our findings show that although code can improve reasoning, structural properties strongly determine its usefulness. In 83% of experiments, restricting fine-tuning data to a specific structural complexity range outperforms training on structurally diverse code, pointing to a data-centric path for improving reasoning beyond scaling.
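The two structural metrics named above can be made concrete with a short sketch. The following is a minimal, hedged approximation (not the paper's actual tooling): McCabe-style cyclomatic complexity is estimated as one plus the number of decision points in the AST, and logical lines of code (LLOC) as the number of statement nodes; the chosen decision-node set and the `sample` function are illustrative assumptions.

```python
import ast

# Decision-point node types treated as adding one branch each
# (a McCabe-style approximation; e.g. a chained `a and b and c`
# still counts as a single BoolOp node here).
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.AsyncFor,
                   ast.ExceptHandler, ast.IfExp, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _DECISION_NODES) for node in ast.walk(tree))

def logical_lines_of_code(source: str) -> int:
    """Count statement nodes (LLOC), ignoring blank lines and comments."""
    tree = ast.parse(source)
    return sum(isinstance(node, ast.stmt) for node in ast.walk(tree))

# Illustrative sample: one `if` branch, four statements.
sample = """
def f(x):
    if x > 0:
        return x
    return -x
"""
print(cyclomatic_complexity(sample))  # → 2
print(logical_lines_of_code(sample))  # → 4
```

Binning fine-tuning examples by these two scores is one plausible way to construct the controlled "complexity range" datasets the abstract describes.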