Evaluating Large Language Models (LLMs) with respect to real-world code complexity is essential. Otherwise, there is a risk of overestimating LLMs' programming abilities based on simplistic benchmarks, only to be disappointed when using them in real-world settings. Recently, researchers have explored constructing more realistic benchmarks by mining or augmenting open-source repositories. Such solutions are usually task-specific, and quality control of data mined from real-world projects can be time-consuming and error-prone. More importantly, evaluating LLMs on fixed benchmark problems is subject to data contamination and overfitting. We propose GeneBench, an automated technique that adds real-world complexities to any programming benchmark. GeneBench leverages multi-objective optimization to increase the complexity of programming problems while keeping the code's readability comparable to that of real-world programs. Transforming four widely-used programming benchmarks with GeneBench and evaluating 13 LLMs (including two reasoning LLMs) on them shows a notable performance drop across all programming tasks (14.9%-60.5%, avg=35.2%), demonstrating that LLMs struggle under real-world complexities. The struggle persists even when LLMs are few-shot prompted or fine-tuned with examples from different versions of GeneBench, demonstrating the challenging nature of the problems. Finally, we show that the performance of the studied LLMs in bug repair is similar under GeneBench and SWE-Bench. This, along with the consistent reproduction of the performance drop of all studied LLMs across four tasks under different versions of GeneBench, makes the technique suitable for evaluating LLMs without the costly construction of real-world benchmarks.
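To make the multi-objective idea concrete, the following is a hypothetical sketch of one way a complexity-vs-readability trade-off could be encoded: among candidate transformed variants of a problem's code, keep only those that are Pareto-optimal for (higher complexity, higher readability). The scoring heuristics and Pareto selection below are illustrative assumptions for exposition, not GeneBench's actual objectives or algorithm.

```python
# Illustrative sketch only: crude stand-ins for the complexity and
# readability objectives a GeneBench-style optimizer might balance.

def complexity(code: str) -> int:
    # Crude proxy: count branching/looping constructs. A real objective
    # would use a proper metric such as cyclomatic complexity.
    return sum(code.count(kw) for kw in ("if ", "for ", "while ", "and ", "or "))

def readability(code: str) -> float:
    # Crude proxy: penalize long lines. Real readability models are richer.
    lines = [line for line in code.splitlines() if line.strip()]
    if not lines:
        return 0.0
    return 1.0 / (1.0 + max(len(line) for line in lines) / 80.0)

def pareto_front(variants):
    # Keep variants that no other variant dominates on both objectives.
    scored = [(v, complexity(v), readability(v)) for v in variants]
    front = []
    for v, c, r in scored:
        dominated = any(
            c2 >= c and r2 >= r and (c2 > c or r2 > r)
            for _, c2, r2 in scored
        )
        if not dominated:
            front.append(v)
    return front
```

A variant that is simpler *and* harder to read than another is discarded; variants trading one objective for the other survive, and a downstream step can pick from the surviving front.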