Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code. But sometimes, an even better explanation is a bug in the programmer's mental model of the language or API they are using. Instead of merely debugging our current code ("giving the programmer a fish"), what if our tools could directly debug our mental models ("teaching the programmer to fish")? In this paper, we apply recent ideas from computational cognitive science to offer a principled framework for doing exactly that. Given a "why?" question about a program, we automatically infer potential misconceptions about the language/API that might cause the user to be surprised by the program's behavior -- and then analyze those misconceptions to provide explanations of the program's behavior. Our key idea is to formally represent misconceptions as counterfactual (erroneous) semantics for the language/API, which can be inferred and debugged using program synthesis techniques. We demonstrate our framework, WatChat, by building systems for explanation in two domains: JavaScript type coercion, and the Git version control system. We evaluate WatChatJS and WatChatGit by comparing their outputs to experimentally collected human-written explanations in these two domains: we show that WatChat's explanations exhibit key features of human-written explanations, unlike those of a state-of-the-art language model.