Large language models (LLMs) such as ChatGPT have shown remarkable capabilities in code generation. Despite these achievements, they rely on enormous training data to acquire a broad spectrum of open-domain knowledge. Moreover, their evaluation revolves around open-domain benchmarks like HumanEval, which consist primarily of programming-contest problems. It is therefore hard to fully characterize the intricacies and challenges of particular domains (e.g., web, game, and math). In this paper, we conduct an in-depth study of LLMs in domain-specific code generation. Our results demonstrate that LLMs exhibit sub-optimal performance in generating domain-specific code, owing to their limited proficiency in using domain-specific libraries. We further observe that incorporating API knowledge into prompts enables LLMs to generate more professional code. Based on these findings, we investigate how to efficiently incorporate API knowledge into the code generation process. We experiment with three strategies for incorporating domain knowledge: an external knowledge inquirer, chain-of-thought prompting, and chain-of-thought fine-tuning. We refer to these strategies collectively as a new code generation approach called DomCoder. Experimental results show that all strategies of DomCoder improve the effectiveness of domain-specific code generation under certain settings. The results also show that ample room for improvement remains, based on which we suggest possible directions for future work.
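As an illustrative sketch only (not the paper's actual DomCoder implementation), incorporating retrieved API knowledge into a code-generation prompt might look like the following. The API entries, prompt template, and function names here are hypothetical assumptions chosen for the example:

```python
# Hypothetical sketch: prepend retrieved domain-API documentation to a
# code-generation prompt, so the LLM sees library knowledge before the task.
# The knowledge base and template below are illustrative, not from the paper.

API_KNOWLEDGE = {
    "pygame.draw.circle": (
        "pygame.draw.circle(surface, color, center, radius) -> Rect; "
        "draws a circle on the given surface."
    ),
    "pygame.display.flip": (
        "pygame.display.flip() -> None; updates the full display surface."
    ),
}

def build_prompt(task: str, api_names: list[str]) -> str:
    """Assemble a prompt that pairs retrieved domain-API docs with the task."""
    doc_lines = [f"- {name}: {API_KNOWLEDGE[name]}" for name in api_names]
    return (
        "You may use the following domain APIs:\n"
        + "\n".join(doc_lines)
        + f"\n\nTask: {task}\nWrite the code step by step."
    )

prompt = build_prompt(
    "Draw a red circle at the window center.",
    ["pygame.draw.circle", "pygame.display.flip"],
)
print(prompt)
```

The assembled prompt would then be sent to the LLM; the chain-of-thought variants described in the paper additionally elicit intermediate reasoning steps before the final code.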