As generative AI is expected to increase global code volumes, the importance of maintainability from a human perspective will become even greater. Various methods have been developed to identify the most important maintainability issues, including aggregated metrics and advanced Machine Learning (ML) models. This study benchmarks several maintainability prediction approaches, including State-of-the-Art (SotA) ML, SonarQube's Maintainability Rating, CodeScene's Code Health, and Microsoft's Maintainability Index. Our results indicate that CodeScene matches the accuracy of SotA ML and outperforms the average human expert. Importantly, unlike SotA ML, CodeScene also provides end users with actionable code smell details to remedy identified issues. Finally, caution is advised with SonarQube due to its tendency to generate many false positives. Unfortunately, our findings call into question the validity of previous studies that relied solely on SonarQube output to establish ground truth labels. To improve reliability in future maintainability and technical debt studies, we recommend employing more accurate metrics. Moreover, reevaluating previous findings using Code Health would mitigate this validity threat.