With the rapid development of Large Language Models (LLMs), a large number of machine learning models have been developed to assist programming tasks, including the generation of program code from natural language input. However, despite the considerable research effort devoted to evaluating and comparing these models, how to evaluate such LLMs for this task remains an open problem. This paper provides a critical review of the existing work on the testing and evaluation of these tools, focusing on two key aspects: the benchmarks and the metrics used in the evaluations. Based on this review, further research directions are discussed.