In the field of code intelligence, effectively modeling long-range code poses a significant challenge. Existing pre-trained language models (PLMs) such as UniXcoder have achieved remarkable success, but they still struggle with long code inputs, mainly due to their limited capacity to maintain contextual continuity and memorize key information over long-range code. To alleviate these difficulties, we propose EXPO, a framework for EXtending Pre-trained language models for lOng-range code. EXPO incorporates two novel memory mechanisms proposed in this paper: Bridge Memory and Hint Memory. Bridge Memory uses a tagging mechanism to connect disparate snippets of long-range code, helping the model maintain contextual coherence. Hint Memory focuses on crucial code elements throughout the global context, such as package imports, by integrating a kNN attention layer that adaptively selects the relevant code elements. This dual-memory approach bridges the gap between understanding local code snippets and maintaining global code coherence, thereby enhancing the model's overall comprehension of long code sequences. We validate the effectiveness of EXPO on five popular pre-trained language models, including UniXcoder, and on two code intelligence tasks: API recommendation and vulnerability detection. Experimental results demonstrate that EXPO significantly improves the performance of the pre-trained language models.
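To make the Hint Memory mechanism concrete, the sketch below shows one plausible form of a kNN attention layer in PyTorch. This is an illustrative assumption rather than the paper's implementation: the function name knn_attention, the tensor shapes, and the choice of k=32 are hypothetical; the abstract states only that a kNN attention layer adaptively selects relevant global code elements such as package imports.

```python
import torch
import torch.nn.functional as F

def knn_attention(query, hint_keys, hint_values, k=32):
    """Hypothetical sketch of a kNN attention layer over a bank of 'hint' states.

    query:       (batch, d) current hidden states
    hint_keys:   (num_hints, d) cached keys for global code elements
                 (e.g., package imports)
    hint_values: (num_hints, d) corresponding value vectors

    Only the k most similar hints per query enter the softmax, so the
    layer adaptively focuses on the relevant global context.
    """
    d = query.size(-1)
    # Scaled similarity of each query to every hint key: (batch, num_hints)
    scores = query @ hint_keys.t() / d ** 0.5
    k = min(k, hint_keys.size(0))
    # Keep only the top-k hints per query; all others are ignored.
    top_scores, top_idx = scores.topk(k, dim=-1)   # both (batch, k)
    weights = F.softmax(top_scores, dim=-1)        # (batch, k)
    # Gather the selected value vectors and take the weighted sum.
    selected = hint_values[top_idx]                # (batch, k, d)
    return torch.einsum('bk,bkd->bd', weights, selected)
```

Restricting the softmax to the top-k hints keeps the attention sparse, which is one natural way to realize adaptive selection over a large bank of global code elements without attending to the full long-range context at every step.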