Large language models (LLMs) have recently demonstrated a remarkable ability to generate code from natural language (NL) prompts. However, in the real world, NL is often too ambiguous to capture the true intent behind programming problems, requiring additional input-output (I/O) specifications. Unfortunately, LLMs can have difficulty aligning their outputs with both the NL prompt and the I/O specification. In this paper, we present a way to mitigate this issue in the context of data science programming, where tasks require explicit I/O specifications for clarity. Specifically, we propose GIFT4Code, a novel approach for the instruction fine-tuning of LLMs with respect to I/O specifications. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program I/O specifications, is provided to the LLM to facilitate instruction fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. The results demonstrate a significant improvement in the LLM's ability to generate code that is not only executable but also accurately aligned with user specifications, substantially improving the quality of code generation for complex data science tasks.
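To make the notion of execution-derived feedback concrete, the sketch below shows one plausible way to run a model-generated snippet and condense its output into a compact I/O specification string that could accompany an instruction-tuning example. This is a minimal illustration only: the function name `derive_io_spec`, the sandboxing via `exec`/`eval`, and the spec format are assumptions for exposition, not the paper's actual pipeline.

```python
import contextlib
import io

import pandas as pd


def derive_io_spec(program: str, env: dict, max_items: int = 5) -> str:
    """Execute a generated program and summarize its final value as an I/O spec.

    Hypothetical sketch: the real system's sandbox and spec format may differ.
    """
    local_env = dict(env)
    # Split off the final line so its value can be captured as the "output".
    *body, last = [ln for ln in program.strip().splitlines() if ln.strip()]
    with contextlib.redirect_stdout(io.StringIO()):
        exec("\n".join(body), {}, local_env)
        result = eval(last, {}, local_env)

    # Render the result as a short, type-aware specification string.
    if isinstance(result, pd.DataFrame):
        cols = ", ".join(f"{c}: {t}" for c, t in result.dtypes.astype(str).items())
        return f"Output: DataFrame with columns [{cols}], {len(result)} rows"
    if isinstance(result, pd.Series):
        preview = ", ".join(map(str, result.head(max_items).tolist()))
        return f"Output: Series[{result.dtype}], e.g. [{preview}]"
    return f"Output: {type(result).__name__} = {result!r}"


# Usage example on a toy synthetic task (data and program are illustrative).
df = pd.DataFrame({"city": ["NYC", "SF", "NYC"], "sales": [3, 5, 7]})
program = "grouped = df.groupby('city')['sales'].sum()\ngrouped"
print(derive_io_spec(program, {"df": df}))
# -> "Output: Series[int64], e.g. [10, 5]"
```

Under this reading, the returned specification string would be appended to the NL intent in a fine-tuning example, so the model learns to condition on both the prompt and the expected output shape.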