Despite the effectiveness of large language models (LLMs) for code generation, they often output incorrect code. One reason is that model output probabilities are often not well-correlated with correctness and reflect only the final output of the generation process. Inspired by findings that LLMs internally encode concepts like truthfulness, this paper explores whether LLMs similarly represent code correctness. Specifically, we identify a correctness representation inside LLMs by contrasting the hidden states between pairs of correct and incorrect code for the same programming tasks. In experiments on four LLMs, we show that ranking with this extracted correctness representation outperforms standard log-likelihood ranking as well as verbalized model confidence. Furthermore, we explore how this internal correctness signal can be used to select higher-quality code samples without requiring test execution. Ultimately, this work demonstrates how leveraging internal representations can enhance code generation systems and make LLMs more reliable, thus improving confidence in automatically generated code.
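To make the contrastive setup concrete, the sketch below shows one plausible instantiation: pooling the last-token hidden state at a fixed layer, taking a difference-in-means direction over (correct, incorrect) code pairs, and ranking new samples by their projection onto that direction. The model name, layer index, pooling choice, and difference-in-means probe are all illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: extract a "correctness direction" by contrasting hidden
# states of correct vs. incorrect code, then rank new samples with it.
# Assumptions (not taken from the paper): a HuggingFace causal LM,
# last-token pooling at a single layer, and a difference-in-means probe.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "deepseek-ai/deepseek-coder-1.3b-base"  # illustrative model choice
LAYER = 12                                      # illustrative layer index

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

@torch.no_grad()
def hidden_state(code: str) -> torch.Tensor:
    """Last-token hidden state at LAYER for a code string."""
    inputs = tok(code, return_tensors="pt", truncation=True)
    out = model(**inputs, output_hidden_states=True)
    # hidden_states is a tuple of (num_layers + 1) tensors,
    # each of shape (batch, seq_len, hidden_dim).
    return out.hidden_states[LAYER][0, -1]

def correctness_direction(pairs):
    """Unit difference-in-means direction from (correct, incorrect) pairs."""
    diffs = [hidden_state(good) - hidden_state(bad) for good, bad in pairs]
    v = torch.stack(diffs).mean(dim=0)
    return v / v.norm()

def rank_samples(samples, direction):
    """Rank candidate programs by projection onto the correctness direction."""
    scores = [(hidden_state(s) @ direction).item() for s in samples]
    return sorted(zip(samples, scores), key=lambda p: -p[1])
```

Under these assumptions, selection without test execution reduces to generating several candidates for a task and returning the top entry of `rank_samples`; the projection score plays the role that log-likelihood or verbalized confidence would otherwise play.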