As large language models (LLMs) such as ChatGPT, Copilot, Claude, and Gemini become integrated into software development workflows, developers increasingly leave traces of AI involvement in their code comments. Among these, some comments explicitly acknowledge both the use of generative AI and the presence of technical shortcomings. Analyzing 6,540 LLM-referencing code comments from public Python- and JavaScript-based GitHub repositories (November 2022 to July 2025), we identified 81 that also self-admit technical debt (SATD). Developers most often describe postponed testing, incomplete adaptation, and limited understanding of AI-generated code, suggesting that AI assistance affects both when and why technical debt emerges. We propose GenAI-Induced Self-admitted Technical debt (GIST) as a conceptual lens for recurring cases in which developers incorporate AI-generated code while explicitly expressing uncertainty about its behavior or correctness.
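The two-stage filtering described above, first identifying LLM-referencing comments and then those that also self-admit debt, can be sketched as a simple keyword screen. This is a minimal illustration under assumed keyword lists; the study's actual detection criteria are not specified here:

```python
# Hypothetical keyword lists (illustrative assumptions, not the study's criteria)
LLM_TERMS = ["chatgpt", "copilot", "claude", "gemini", "llm"]
SATD_TERMS = ["todo", "fixme", "hack", "not sure", "untested", "workaround"]

def references_llm(comment: str) -> bool:
    """Stage 1: does the comment mention a generative-AI tool?"""
    text = comment.lower()
    return any(term in text for term in LLM_TERMS)

def admits_debt(comment: str) -> bool:
    """Stage 2: does the comment also contain an SATD marker?"""
    text = comment.lower()
    return any(term in text for term in SATD_TERMS)

comments = [
    "# generated by ChatGPT, untested -- verify edge cases",
    "# Copilot suggested this; works fine",
    "# plain helper function",
]

# Only comments passing both stages are GIST candidates
gist_candidates = [c for c in comments if references_llm(c) and admits_debt(c)]
print(gist_candidates)
```

In practice such a screen would only produce candidates for manual review, since keyword matching alone cannot distinguish genuine admissions of debt from incidental mentions.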