The adoption of Large Language Models (LLMs) across multiple contexts has sparked interest in understanding how scaling model size might lead to behavioral changes, as LLMs can exhibit behaviors not observed in their smaller counterparts. Understanding these emergent capabilities is essential for advancing LLM development and improving their interpretability across diverse tasks. However, whether LLMs exhibit true emergence in the context of Software Engineering remains an unexplored topic, as most research has focused on NLP tasks. In this paper, we investigate the emergence of capabilities in the context of SE. We propose a model-agnostic pipeline for evaluating this phenomenon across three SE tasks: bug fixing, code translation, and commit message generation. More precisely, for each task, we present a case study instantiating our pipeline to analyze the emergence of capabilities in CodeGen1-multi across four scales ranging from 350M to 16.1B parameters. Our findings do not provide evidence to support the idea of emergent capabilities resulting from scaling the model size in the selected set of tasks. We hope our results can pave the way to a more nuanced understanding of the emergent capabilities of LLMs within the SE domain, guiding future research to focus on task-specific evaluations and on identifying alternative factors contributing to this phenomenon. Our work underscores the importance of task diversity in examining model behaviors and highlights potential limitations in transferring prior understandings of, and approaches to, emergence from NLP to Software Engineering.