Code generation tasks aim to automate the conversion of user requirements into executable code, substantially reducing manual development effort and improving software productivity. The emergence of large language models (LLMs) has significantly advanced code generation, yet their efficiency remains limited by inherent architectural constraints: generating each token requires a complete inference pass, which keeps the full context resident in memory and drives up resource consumption as outputs grow longer. While existing research prioritizes inference-phase optimizations such as prompt compression and model quantization, the generation phase remains underexplored. To tackle these challenges, we propose a knowledge-infused framework named ShortCoder, which improves code generation efficiency while preserving semantic equivalence and readability. In particular, we introduce: (1) ten syntax-level simplification rules for Python, derived from AST-preserving transformations, which achieve an 18.1% token reduction without compromising functionality; (2) a hybrid data synthesis pipeline that integrates rule-based rewriting with LLM-guided refinement, producing ShorterCodeBench, a corpus of semantically validated pairs of original and simplified code; (3) a fine-tuning strategy that injects conciseness awareness into base LLMs. Extensive experiments demonstrate that ShortCoder consistently outperforms state-of-the-art methods on HumanEval, improving generation efficiency by 18.1%-37.8% over previous methods while maintaining code generation performance.
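To make the idea of an AST-preserving, syntax-level simplification rule concrete, the sketch below shows one plausible rule of the kind the abstract describes; the paper's actual ten rules are not reproduced here, and the rule chosen (rewriting `x = x + e` into `x += e`) is an illustrative assumption. It uses Python's standard `ast` module, so the transformation is guaranteed to operate on syntax rather than text, and `ast.unparse` emits the shortened source.

```python
import ast

class AugAssignRewriter(ast.NodeTransformer):
    """Hypothetical simplification rule: `x = x + e`  ->  `x += e`.

    Because the rewrite happens on the AST, the resulting code is
    syntactically valid by construction and semantically equivalent
    for plain-name targets.
    """

    def visit_Assign(self, node: ast.Assign) -> ast.AST:
        self.generic_visit(node)
        # Match a single Name target whose value is a BinOp with the
        # same name as its left operand, e.g. `total = total + x`.
        if (len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)
                and isinstance(node.value, ast.BinOp)
                and isinstance(node.value.left, ast.Name)
                and node.value.left.id == node.targets[0].id):
            return ast.AugAssign(target=node.targets[0],
                                 op=node.value.op,
                                 value=node.value.right)
        return node

def simplify(source: str) -> str:
    """Apply the rule to a source snippet and re-emit shortened code."""
    tree = AugAssignRewriter().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

original = "total = total + price * qty\n"
print(simplify(original))  # total += price * qty
```

A real pipeline in this spirit would chain several such `NodeTransformer` rules and then verify semantic consistency (e.g. by running the original test suite against the simplified code) before a pair is admitted to the training corpus.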