Effective code documentation is essential for collaboration, comprehension, and long-term software maintainability, yet developers often neglect it due to its repetitive nature. Automated documentation generation has evolved from heuristic and rule-based methods to neural network-based and large language model (LLM)-based approaches. However, existing methods often overlook structural and quantitative characteristics of code that influence readability and comprehension. Prior research suggests that code metrics capture information relevant to program understanding. Building on these insights, this paper investigates the role of source code metrics as auxiliary signals for automated documentation generation, focusing on computational notebooks, a popular medium among data scientists that integrates code, narrative, and results but suffers from inconsistent documentation. We propose a two-stage approach. First, we refine the CodeSearchNet dataset construction process to create a specialized dataset from over 17 million code and markdown cells; after structural and semantic filtering, we extract 36,734 high-quality (code, markdown) pairs. Second, we evaluate two modeling paradigms, a lightweight CNN-RNN architecture and a few-shot GPT-3.5 architecture, with and without metric information. Results show that incorporating code metrics improves the accuracy and contextual relevance of generated documentation, yielding gains of 6% in BLEU-1 and 3% in ROUGE-L F1 for the CNN-RNN architecture, and 9% in BERTScore F1 for the LLM-based architecture. These findings demonstrate that integrating code metrics provides valuable structural context, enhancing automated documentation generation across diverse model families.
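As a point of reference for the BLEU-1 gains reported above, the sketch below shows one standard way BLEU-1 is defined: clipped unigram precision multiplied by a brevity penalty. This is an illustrative, self-contained implementation, not the paper's actual evaluation pipeline, and the example sentences are hypothetical.

```python
import math
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """BLEU-1: clipped unigram precision times a brevity penalty.

    Candidate unigram counts are clipped by their counts in the
    reference; the brevity penalty exp(1 - r/c) applies when the
    candidate is shorter than the reference.
    """
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(n, ref_counts[tok]) for tok, n in cand_counts.items())
    precision = clipped / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Hypothetical (code, markdown) evaluation pair: every candidate token
# appears in the reference, so precision is 1.0 and only the brevity
# penalty exp(1 - 5/4) reduces the score.
score = bleu1("plot the loss curve", "plot the training loss curve")
```

Corpus-level BLEU additionally aggregates counts across all pairs before dividing, so averaging per-sentence scores like this one is only an approximation of the usual reported number.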