Researchers have investigated the potential of leveraging pre-trained language models, such as CodeBERT, to enhance source-code-related tasks. Previous approaches have relied on CodeBERT's '[CLS]' token as the embedding representation of an input sequence, requiring additional neural network layers to strengthen the feature representation and thereby increasing computational cost. These approaches also fail to fully exploit the rich knowledge contained in the source code and its associated text, which can limit classification performance. We propose CodeClassPrompt, a text classification technique that uses prompt learning to elicit knowledge about the input sequence from the pre-trained model itself, eliminating the need for additional layers and lowering computational cost. By applying an attention mechanism, we fuse multi-layered knowledge into task-specific features, improving classification accuracy. Comprehensive experiments on four distinct source-code-related tasks show that CodeClassPrompt achieves competitive performance while significantly reducing computational overhead.
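The attention-based fusion of multi-layer knowledge mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the learned attention query is replaced by a random vector, and the layer features stand in for the per-layer hidden states a model like CodeBERT would produce (12 transformer layers plus the embedding layer, each 768-dimensional).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(layer_feats, query):
    """Fuse per-layer features into a single task-specific vector.

    layer_feats: (L, d) array, one feature vector per model layer
    query: (d,) attention query (random here; learned in practice)
    """
    # Scaled dot-product scores: one scalar per layer
    scores = layer_feats @ query / np.sqrt(layer_feats.shape[1])
    weights = softmax(scores)        # (L,) attention weights over layers
    return weights @ layer_feats     # (d,) weighted combination

rng = np.random.default_rng(0)
L, d = 13, 768                       # hypothetical: 12 layers + embeddings, hidden size 768
feats = rng.standard_normal((L, d))  # stand-in for per-layer hidden states
q = rng.standard_normal(d)           # stand-in for a learned query
fused = attention_pool(feats, q)
print(fused.shape)  # (768,)
```

The fused vector can then be passed directly to a lightweight classifier head, which is what allows the approach to avoid the heavier additional layers used by prior methods.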