Large language models (LLMs) have made significant advances in code-related tasks, yet many LLMs treat code as a simple token sequence, neglecting its structured nature. We introduce AST-T5, a novel pretraining paradigm that leverages the Abstract Syntax Tree (AST) for enhanced code generation, transpilation, and understanding. Using dynamic programming, our AST-Aware Segmentation retains code structure, while our AST-Aware Span Corruption objective equips the model to reconstruct various code structures. Unlike other models, AST-T5 requires neither intricate program analyses nor architectural changes, so it integrates seamlessly with any encoder-decoder Transformer. Evaluations show that AST-T5 consistently outperforms similar-sized LMs across various code-related tasks. Structure-awareness makes AST-T5 particularly powerful in code-to-code tasks, surpassing CodeT5 in exact match score by 2 points on Bugs2Fix and by 3 points on Java-C# transpilation in CodeXGLUE. Our code and model are publicly available at https://github.com/gonglinyuan/ast_t5.
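To make the AST-Aware Span Corruption idea concrete, the minimal sketch below uses Python's built-in `ast` module to mask the source span of a single randomly chosen subtree, keeping the masked span as the reconstruction target. This is an illustration of the idea only, not the paper's implementation (which corrupts multiple spans, supports multiple languages, and operates over subword tokens); the function name `mask_random_subtree` and the sentinel format are hypothetical.

```python
import ast
import random

# T5-style sentinel token; the exact format here is an assumption.
SENTINEL = "<extra_id_0>"

def mask_random_subtree(code: str, seed: int = 0) -> tuple[str, str]:
    """Replace the source span of one randomly chosen AST subtree
    with a sentinel, returning (corrupted_code, masked_span)."""
    rng = random.Random(seed)
    tree = ast.parse(code)
    # Keep only nodes that carry precise source locations (Python 3.8+).
    nodes = [
        n for n in ast.walk(tree)
        if getattr(n, "end_lineno", None) is not None
    ]
    node = rng.choice(nodes)
    lines = code.splitlines(keepends=True)
    # Convert (line, column) locations to absolute character offsets.
    start = sum(len(l) for l in lines[: node.lineno - 1]) + node.col_offset
    end = sum(len(l) for l in lines[: node.end_lineno - 1]) + node.end_col_offset
    return code[:start] + SENTINEL + code[end:], code[start:end]

if __name__ == "__main__":
    src = "def add(a, b):\n    return a + b\n"
    corrupted, target = mask_random_subtree(src, seed=7)
    print(corrupted)  # encoder input: code with one subtree masked
    print(target)     # decoder target: the span to reconstruct
```

Because the masked span is aligned to an AST node, the corrupted input always elides a syntactically complete unit (an expression, statement, or function) rather than an arbitrary token window, which is what equips the model to reconstruct whole code structures.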