Language models have proven successful across a wide range of software engineering tasks, but their substantial computational costs often hinder practical adoption. To address this challenge, researchers have begun applying compression strategies to improve the efficiency of language models for code. These strategies aim to reduce inference latency and memory usage, though often at the cost of reduced model effectiveness. How these strategies affect the efficiency and effectiveness of language models for code, however, remains poorly understood. We empirically investigate the impact of three well-known compression strategies -- knowledge distillation, quantization, and pruning -- across three classes of software engineering tasks: vulnerability detection, code summarization, and code search. Our findings reveal that the impact of these strategies varies greatly depending on the task and the specific compression method employed. Practitioners and researchers can use these insights to make informed decisions when selecting a compression strategy, balancing efficiency and effectiveness according to their needs.
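Of the three compression strategies named above, unstructured magnitude pruning is the simplest to illustrate: weights whose magnitudes fall below a sparsity-determined threshold are zeroed out. The following is a minimal pure-Python sketch, not the implementation evaluated in the study; the function name and flat-list weight representation are illustrative assumptions.

```python
def magnitude_prune(weights, sparsity):
    """Zero the smallest-magnitude fraction of weights (unstructured pruning).

    weights:  flat list of floats (illustrative stand-in for a weight tensor)
    sparsity: fraction in [0, 1] of weights to remove
    """
    if not 0.0 <= sparsity <= 1.0:
        raise ValueError("sparsity must be in [0, 1]")
    k = int(len(weights) * sparsity)  # number of weights to prune
    if k == 0:
        return list(weights)
    # Threshold is the magnitude of the k-th smallest weight; everything
    # at or below it is zeroed (ties may prune slightly more than k).
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning `[0.5, -0.1, 0.3, -0.02]` at 50% sparsity zeroes the two smallest-magnitude entries, yielding `[0.5, 0.0, 0.3, 0.0]`. Real pruning pipelines additionally exploit the resulting sparsity with sparse storage or sparse kernels, which is where the latency and memory gains come from.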