Machine learning models trained on code and related artifacts offer valuable support for software maintenance but suffer from interpretability issues due to their complex internal variables. These concerns are particularly significant in safety-critical applications, where the models' decision-making processes must be reliable. The specific features and representations these models learn remain unclear, contributing to hesitancy about their wide adoption. To address these challenges, we introduce DeepCodeProbe, a probing approach that examines the syntax and representation learning abilities of ML models designed for software maintenance tasks. We apply DeepCodeProbe to state-of-the-art models for code clone detection, code summarization, and comment generation. Our findings reveal that while small models capture abstract syntactic representations, their ability to fully grasp programming language syntax is limited. Increasing model capacity improves syntax learning but introduces trade-offs such as longer training times and overfitting. DeepCodeProbe also identifies specific code patterns that the models learn from their training data. Finally, we provide best practices for training models on code to improve both performance and interpretability, along with an open-source replication package that enables DeepCodeProbe to be applied in interpreting other code-related models.