Convex analysis is a modern branch of mathematics with many applications. As Large Language Models (LLMs) begin to automate research-level mathematics and science, it is important for LLMs to demonstrate the ability to understand and reason about convexity. We introduce \cb, a scalable and mechanically verifiable benchmark for testing \textit{whether LLMs can identify the convexity of a symbolic objective under deep functional composition.} Experiments on frontier LLMs reveal a sharp compositional reasoning gap: performance degrades rapidly with increasing depth, dropping from an F1-score of $1.0$ at depth $2$ to approximately $0.2$ at depth $100$. Inspection of models' reasoning traces reveals two failure modes: \textit{parsing failure} and \textit{lazy reasoning}. To address these limitations, we propose an agentic divide-and-conquer framework that (i) offloads parsing to an external tool that constructs an abstract syntax tree (AST) and (ii) enforces recursive reasoning over each intermediate sub-expression with focused context. This framework reliably mitigates deep-composition failures, achieving substantial performance improvements at large depths (e.g., an F1-score of $1.0$ at depth $100$).
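The divide-and-conquer idea can be sketched mechanically: parse the objective into an AST, then recursively label each node's curvature using standard composition rules from disciplined convex programming, so each step requires only local context. The following minimal Python sketch (with a hypothetical atom table and `Node`/`curvature` names; not the paper's implementation) illustrates why depth-$100$ compositions remain tractable under this scheme:

```python
from dataclasses import dataclass

# Each atom carries its curvature and monotonicity (a small illustrative table).
ATOMS = {
    "exp":    ("convex",  "increasing"),
    "square": ("convex",  "none"),
    "sqrt":   ("concave", "increasing"),
    "log":    ("concave", "increasing"),
}

@dataclass
class Node:
    op: str                      # atom name, or "var" for the leaf variable
    child: "Node | None" = None

def curvature(node: Node) -> str:
    """Recursively label a subtree: 'affine', 'convex', 'concave', or 'unknown'."""
    if node.op == "var":
        return "affine"
    inner = curvature(node.child)
    curv, mono = ATOMS[node.op]
    if inner == "affine":
        return curv
    # Composition rules: e.g., convex-nondecreasing of a convex argument is convex.
    if curv == "convex" and ((mono == "increasing" and inner == "convex") or
                             (mono == "decreasing" and inner == "concave")):
        return "convex"
    if curv == "concave" and ((mono == "increasing" and inner == "concave") or
                              (mono == "decreasing" and inner == "convex")):
        return "concave"
    return "unknown"  # rules are sound but not complete, as in DCP

# A depth-100 chain exp(exp(...exp(square(x))...)) stays verifiably convex,
# because each recursive step only inspects one atom and its child's label.
expr = Node("square", Node("var"))
for _ in range(100):
    expr = Node("exp", expr)
print(curvature(expr))  # → convex
```

Note that the recursion mirrors the AST, so the per-step reasoning burden is constant in depth; this is exactly the focused-context property the framework enforces on the LLM.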