Large Language Models (LLMs) are increasingly integrated into software engineering workflows, yet current benchmarks provide only coarse performance summaries that obscure the diverse capabilities and limitations of these models. This paper investigates whether LLMs' code-comprehension performance aligns with traditional human-centric software metrics or instead reflects distinct, non-human regularities. We introduce a diagnostic framework that reframes code understanding as a binary input-output consistency task, enabling evaluation of both classification and generative models. Using a large-scale dataset, we correlate per-instance model performance with traditional, human-centric complexity metrics such as lexical size, control-flow complexity, and abstract syntax tree structure. Our analyses show that these human-defined metrics predict LLM success only weakly (AUROC 0.63), whereas learned shadow models achieve substantially higher predictive performance (AUROC 0.86), capturing complex, partially predictable patterns beyond traditional software measures. These findings suggest that LLM comprehension reflects model-specific regularities that are only partially accessible through either human-designed or learned features. They underscore the need for benchmark methodologies that move beyond aggregate accuracy toward instance-level diagnostics, while acknowledging fundamental limits in predicting when a model will succeed.
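To make the setup concrete, the sketch below illustrates the per-instance framing under stated assumptions: `run_model` is a hypothetical callable standing in for the LLM under test, the three human-centric features are simple proxies for lexical size, abstract-syntax-tree structure, and control-flow complexity, and the TF-IDF plus gradient-boosting shadow model is one plausible instantiation rather than the paper's exact pipeline.

```python
import ast

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def consistency_label(run_model, source: str, inputs, expected) -> int:
    """Binary input-output consistency: 1 iff the model's predicted output
    for `inputs` matches the ground truth. `run_model` is a hypothetical
    wrapper around whatever LLM is being evaluated."""
    return int(run_model(source, inputs) == expected)


def human_metrics(source: str) -> list[float]:
    """Human-centric complexity proxies for one (valid Python) snippet:
    lexical size, AST node count, and a crude cyclomatic-complexity proxy
    (1 + number of branch points)."""
    tree = ast.parse(source)
    branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                   for n in ast.walk(tree))
    return [len(source.split()), sum(1 for _ in ast.walk(tree)), 1 + branches]


def compare_predictors(snippets: list[str], success: list[int]) -> dict[str, float]:
    """AUROC of human-centric metrics vs. a learned shadow model at
    predicting per-instance LLM success labels (1 = consistent output)."""
    tr_src, te_src, tr_y, te_y = train_test_split(
        snippets, success, test_size=0.3, random_state=0)

    # (a) Traditional metrics -> linear probe.
    probe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    probe.fit([human_metrics(s) for s in tr_src], tr_y)
    auc_metrics = roc_auc_score(
        te_y, probe.predict_proba([human_metrics(s) for s in te_src])[:, 1])

    # (b) Shadow model: learned character n-gram features -> boosted trees.
    shadow = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        GradientBoostingClassifier(random_state=0))
    shadow.fit(tr_src, tr_y)
    auc_shadow = roc_auc_score(te_y, shadow.predict_proba(te_src)[:, 1])

    return {"human_metrics_auroc": auc_metrics, "shadow_auroc": auc_shadow}
```

In this framing, the gap between the two AUROC values mirrors the paper's central comparison: if a shadow model trained directly on code text substantially outperforms a probe over human-designed metrics, per-instance LLM success is governed by regularities those metrics do not capture.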