Large language models (LLMs) have experienced exponential growth and demonstrate remarkable performance across various tasks. However, contemporary research primarily centers on enhancing the size and quality of pretraining data, while still relying on the next-token-prediction task over an autoregressive transformer architecture. Whether this task truly facilitates the model's comprehension of code logic remains questionable; we speculate that the model still interprets code as mere text, whereas humans emphasize the underlying logical knowledge. To verify this, we introduce a new task, "Logically Equivalent Code Selection," which requires selecting the logically equivalent code from a candidate set, given a query code. Our experimental findings indicate that current LLMs underperform on this task, since they understand code as an unordered bag of keywords. To improve their performance, we propose an advanced pretraining task, "Next Token Prediction+". This task aims to modify the sentence embedding distribution of the LLM without sacrificing its generative capabilities. Our experimental results reveal that after this pretraining, both Code Llama and StarCoder, two prevalent code-domain pretrained models, show significant improvements on our logically equivalent code selection task and on the code completion task.