Code readability is an important indicator of software maintainability, as it can significantly affect maintenance effort. Recently, large language models (LLMs) have been used to evaluate code readability. However, readability judgments differ among developers, so LLM-based evaluation needs to be personalized. This study proposes a method that calibrates LLM readability evaluations using collaborative filtering. Our preliminary analysis suggests that the method effectively improves the accuracy of LLM-based readability evaluation.
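The abstract does not specify the calibration algorithm, so the following is only a minimal illustrative sketch of one way collaborative filtering could adjust an LLM's readability score toward a particular developer's taste: treat the LLM score as a baseline, and add a similarity-weighted average of how much similar developers' past ratings deviated from the LLM on the same snippet. All function names, the rating scale, and the data layout are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def calibrate(llm_scores, dev_ratings, target_dev, snippet):
    """Hypothetical user-based collaborative-filtering calibration.

    llm_scores:  dict snippet -> LLM readability score (e.g. 1-5)
    dev_ratings: dict developer -> dict snippet -> that developer's rating
    Returns a calibrated score for (target_dev, snippet).
    """
    target = dev_ratings[target_dev]

    def similarity(a, b):
        # Pearson correlation over snippets both developers have rated.
        common = sorted(set(a) & set(b))
        if len(common) < 2:
            return 0.0
        va = np.array([a[s] for s in common], dtype=float)
        vb = np.array([b[s] for s in common], dtype=float)
        if va.std() == 0 or vb.std() == 0:
            return 0.0
        return float(np.corrcoef(va, vb)[0, 1])

    num = den = 0.0
    for dev, ratings in dev_ratings.items():
        if dev == target_dev or snippet not in ratings:
            continue
        sim = similarity(target, ratings)
        if sim <= 0:  # keep only positively correlated developers
            continue
        # How far this similar developer deviated from the LLM baseline.
        num += sim * (ratings[snippet] - llm_scores[snippet])
        den += abs(sim)
    correction = num / den if den else 0.0
    return llm_scores[snippet] + correction
```

For example, if developers whose past ratings correlate with the target's consistently rated a snippet one point above the LLM, the calibrated score shifts up by roughly one point; with no similar developers, the LLM score is returned unchanged.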