In the context of newly released software frameworks, large language models (LLMs) often exhibit poor performance and a high rate of hallucination, as such environments are absent from their training data. Although inference-time augmentation techniques such as retrieval-augmented generation (RAG) can partially mitigate hallucinations, knowledge injection through prompting alone is insufficient for models to fully understand the intrinsic relationships among the components of a codebase, or to reason about correct compositions and apply them. Although explicit knowledge injection can be achieved through post-training, unseen codebases, unlike public code domains, typically provide only source code and lack the large volumes of high-quality, usage-oriented code that can be leveraged directly as training data. Consequently, existing data synthesis approaches are unable to adequately capture the usage scenarios of unseen codebases when restricted to source code alone. To address these challenges, we propose UCD-Training, a two-stage training framework for reasoning-aware data synthesis grounded in a code graph constructed from unseen codebases. UCD-Training first parses the source code to build a code graph, then conducts dependency-preserving continued pretraining (CPT) on file-level dependency data, followed by graph-grounded supervised fine-tuning (SFT) on three types of synthesized data augmented with explicit reasoning traces: (1) single-hop relation reasoning data, (2) compositional API reasoning data, and (3) codebase utilization data. We further introduce UnseenCodeBench, a new benchmark for code generation on unseen codebases, and conduct comprehensive experiments across multiple codebases.
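The abstract does not specify how the code graph is built beyond parsing the source. The sketch below is a minimal illustration, assuming Python sources and the standard-library `ast` module, of how file-level definitions and import dependencies could be extracted as graph nodes and edges; the function name `build_code_graph`, the node scheme, and the `"defines"`/`"imports"` edge labels are all hypothetical, not taken from the paper.

```python
import ast
from pathlib import Path

def build_code_graph(repo_root: str):
    """Walk a repository and build a simple code graph:
    nodes are modules and their top-level definitions;
    edges record 'defines' and file-level 'imports' relations."""
    nodes, edges = set(), []
    for path in Path(repo_root).rglob("*.py"):
        # Derive a dotted module name from the file path.
        module = path.relative_to(repo_root).with_suffix("").as_posix().replace("/", ".")
        nodes.add(("module", module))
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for stmt in tree.body:
            # Top-level classes and functions become definition nodes.
            if isinstance(stmt, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                nodes.add(("def", f"{module}.{stmt.name}"))
                edges.append((module, "defines", f"{module}.{stmt.name}"))
            # Imports become file-level dependency edges.
            elif isinstance(stmt, ast.Import):
                for alias in stmt.names:
                    edges.append((module, "imports", alias.name))
            elif isinstance(stmt, ast.ImportFrom) and stmt.module:
                edges.append((module, "imports", stmt.module))
    return nodes, edges
```

Under this sketch's assumptions, the `"imports"` edges would supply the file-level dependency signal for ordering CPT data, and individual edges could be verbalized into single-hop relation questions for SFT; the abstract does not detail either data format.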