Transformer-based language models of code have achieved state-of-the-art performance across a wide range of software analytics tasks, but their practical deployment remains limited due to high computational costs, slow inference speeds, and significant environmental impact. To address these challenges, recent research has increasingly explored knowledge distillation as a method for compressing a large language model of code (the teacher) into a smaller model (the student) while maintaining performance. However, the degree to which a student model deeply mimics the predictive behavior and internal representations of its teacher remains largely unexplored, as current accuracy-based evaluation provides only a surface-level view of model quality and often fails to capture deeper discrepancies in behavioral fidelity between the teacher and student models. To address this gap, we empirically show that student models often fail to deeply mimic their teachers, resulting in up to a 285% greater performance drop under adversarial attacks, a gap that traditional accuracy-based evaluation does not capture. We therefore propose MetaCompress, a metamorphic testing framework that systematically evaluates behavioral fidelity by comparing the outputs of teacher and student models under a set of behavior-preserving metamorphic relations. We evaluate MetaCompress on two widely studied tasks, using compressed versions of popular language models of code obtained via three different knowledge distillation techniques: Compressor, AVATAR, and MORPH. The results show that MetaCompress identifies up to 62% behavioral discrepancies in student models, underscoring the need for behavioral fidelity evaluation within the knowledge distillation pipeline and establishing MetaCompress as a practical framework for testing compressed language models of code derived through knowledge distillation.
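To make the core idea concrete, the check below is a minimal sketch of a MetaCompress-style metamorphic fidelity test: a behavior-preserving transformation (here, identifier renaming, a hypothetical example of such a relation) is applied to an input program, and a behavioral discrepancy is flagged when the teacher and student agree on the original input but diverge on the transformed variant. The models are toy stand-ins, and all function names are illustrative assumptions, not the framework's actual API.

```python
import re

def rename_identifiers(code: str, mapping: dict) -> str:
    """Behavior-preserving metamorphic relation: rename identifiers.
    Renaming variables does not change program semantics, so a
    faithful model pair should behave consistently on both versions."""
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

def behavioral_discrepancy(teacher, student, original: str, variant: str) -> bool:
    """Flag a discrepancy when teacher and student agree on the
    original input but disagree on its semantics-preserving variant."""
    agree_original = teacher(original) == student(original)
    agree_variant = teacher(variant) == student(variant)
    return agree_original and not agree_variant

# Toy stand-in classifiers (hypothetical): the teacher keys on the API
# call itself, while the distilled student has latched onto a spurious
# surface pattern that breaks under renaming.
teacher = lambda c: "vulnerable" if "strcpy" in c else "safe"
student = lambda c: "vulnerable" if "strcpy(buf" in c else "safe"

original = "strcpy(buf, src);"
variant = rename_identifiers(original, {"buf": "dest"})
print(behavioral_discrepancy(teacher, student, original, variant))  # True
```

On the original snippet both models predict "vulnerable", so plain accuracy-based evaluation would rate them identical; only the renamed variant exposes that the student has not mimicked the teacher's behavior, which is the kind of discrepancy the metamorphic relations are designed to surface.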