Context engineering has emerged as a pivotal paradigm for unlocking the potential of Large Language Models (LLMs) in Software Engineering (SE) tasks, enabling performance gains at test time without model fine-tuning. Despite this success, existing research lacks both a systematic taxonomy of SE-specific context types and a dedicated benchmark for quantifying the heterogeneous effects of different contexts across core SE workflows. To address this gap, we propose CL4SE (Context Learning for Software Engineering), a comprehensive benchmark built on a fine-grained taxonomy of four SE-oriented context types (interpretable examples, project-specific context, procedural decision-making context, and mixed positive-negative context), each mapped to a representative task (code generation, code summarization, code review, and patch correctness assessment, respectively). We construct high-quality datasets comprising over 13,000 samples drawn from more than 30 open-source projects and evaluate five mainstream LLMs across nine metrics. Extensive experiments show that context learning yields an average performance improvement of 24.7% across all tasks. Specifically, procedural decision-making context boosts code review performance by up to 33% (Qwen3-Max), mixed positive-negative context improves patch correctness assessment by 30% (DeepSeek-V3), project-specific context raises code summarization BLEU by 14.78% (GPT-Oss-120B), and interpretable examples enhance code generation Pass@1 by 5.72% (DeepSeek-V3). CL4SE establishes the first standardized evaluation framework for SE context learning, offers actionable empirical insights into task-specific context design, and releases a large-scale dataset to facilitate reproducible research in this domain.