Large language models (LLMs) have reshaped the landscape of program synthesis. However, contemporary LLM-based code completion systems often hallucinate broken code because they lack appropriate context, particularly when working with definitions that are neither in the training data nor near the cursor. This paper demonstrates that tight integration with the type and binding structure of a language, as exposed by its language server, can address this contextualization problem in a token-efficient manner. In short, we contend that AIs need IDEs, too! In particular, we integrate LLM code generation into the Hazel live program sketching environment. The Hazel Language Server identifies the type and typing context of the hole being filled, even in the presence of errors, ensuring that a meaningful program sketch is always available. This allows prompting with codebase-wide contextual information that is not lexically local to the cursor, nor necessarily in the same file, but that is likely to be semantically local to the developer's goal. Completions synthesized by the LLM are then iteratively refined via further dialog with the language server. To evaluate these techniques, we introduce MVUBench, a dataset of model-view-update (MVU) web applications. These applications serve as challenge problems due to their reliance on application-specific data structures. We find that contextualization with type definitions is particularly impactful. After introducing our ideas in the context of Hazel, we reproduce our techniques and port MVUBench to TypeScript in order to validate the applicability of these methods to higher-resource languages. Finally, we outline ChatLSP, a conservative extension to the Language Server Protocol (LSP) that language servers can implement to expose capabilities that AI code completion systems of various designs can use to incorporate static context when generating prompts for an LLM.
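To illustrate why MVU applications stress application-specific context, consider a minimal MVU-style counter in TypeScript. This is an illustrative sketch only; the type and function names below are our own and are not drawn from MVUBench.

```typescript
// Application-specific model type: data an LLM cannot guess
// unless its definition is supplied in the prompt context.
type Model = { count: number };

// Application-specific message (action) type, as a discriminated union.
type Msg =
  | { kind: "Increment" }
  | { kind: "Decrement" }
  | { kind: "Reset" };

const init: Model = { count: 0 };

// Completing the body of a function like this at a typed hole requires
// the definitions of Model and Msg, which may be far from the cursor
// or in another file entirely.
function update(model: Model, msg: Msg): Model {
  switch (msg.kind) {
    case "Increment":
      return { count: model.count + 1 };
    case "Decrement":
      return { count: model.count - 1 };
    case "Reset":
      return init;
  }
}
```

The point of the sketch is that `update` is unwritable without the definitions of `Model` and `Msg`: they are application-specific, so they appear in neither the training data nor, in general, the lexical neighborhood of the hole, which is exactly the situation type-directed context retrieval addresses.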