Architectural tactics (ATs), as the concrete implementation of architectural decisions in code, address the non-functional requirements of software systems. Because architectural knowledge is implicit in code, developers risk inadvertently altering or removing these tactics during code modifications or optimizations. Such unintended changes can trigger architectural erosion, gradually undermining the system's original design. Although many researchers have proposed machine learning-based methods to improve the accuracy of AT detection in code, the black-box nature of these methods and the architectural domain knowledge they presuppose make it difficult for developers to verify the results. Effective verification requires not only accurate detection results but also interpretable explanations that make those results comprehensible; however, such explanations remain a critical gap in current research. Large language models (LLMs) can generate easily interpretable AT detection comments when equipped with domain knowledge, but fine-tuning LLMs to acquire that knowledge faces challenges such as catastrophic forgetting and hardware constraints. We therefore propose Prmt4TD, a small-model-augmented prompting framework that enhances both the accuracy and the comprehensibility of AT detection. By combining fine-tuned small models with in-context learning, Prmt4TD reduces fine-tuning costs while supplying the LLM with additional domain knowledge, and it leverages the strong processing and reasoning capabilities of LLMs to generate easily interpretable AT detection results. Our evaluation demonstrates that Prmt4TD improves accuracy (\emph{F1-score}) by 13\%-23\% on the balanced AT dataset and enhances the comprehensibility of the detection results.