Training Large Language Models (LLMs) with synthetic data is a prevalent practice in code generation. A key approach is self-training, where LLMs are iteratively trained on self-generated correct code snippets. In this setting, the self-generated code is drawn from a conditional distribution, conditioned on a specific seed description. However, the seed description is not the only valid representation that aligns with its intended meaning. Since all valid descriptions and code snippets together form a joint space, code drawn from a single conditional distribution underrepresents the full description-code space. To address this, we propose Gibbs Fine-Tuning (GiFT), a novel self-training method inspired by Gibbs sampling. GiFT allows self-generated data to be drawn from the marginal distribution of the joint space, thereby mitigating the biases inherent in conditional sampling. We provide a theoretical analysis demonstrating the potential benefits of fine-tuning LLMs with code derived from the marginal distribution. Furthermore, we propose a perplexity-based code selection method to mitigate the imbalanced long-tail distribution of the self-generated code. Empirical evaluation of two LLMs across four datasets demonstrates that GiFT achieves superior performance, particularly on more challenging benchmarks.
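The Gibbs-sampling idea above can be illustrated with a minimal sketch: starting from a seed description, alternately sample code from a description and a new description from that code, so that later samples approximate draws from the marginal of the joint description-code space rather than the conditional tied to the seed. The helper functions `gen_code`, `gen_desc`, `passes_tests`, and `log_prob` are hypothetical stand-ins for LLM calls and unit-test checks, and the median-centered perplexity filter is one plausible reading of the selection step, not the paper's exact criterion.

```python
import math

def gibbs_self_sample(seed_desc, gen_code, gen_desc, passes_tests, steps=4):
    """Alternate desc -> code -> desc -> ... (a Gibbs-style chain).
    Later (desc, code) pairs approximate draws from the marginal of the
    joint space rather than p(code | seed_desc) alone."""
    pairs = []
    desc = seed_desc
    for _ in range(steps):
        code = gen_code(desc)           # sample code conditioned on current desc
        if passes_tests(code):
            pairs.append((desc, code))  # keep only verified-correct pairs
        desc = gen_desc(code)           # re-describe the code: next Gibbs step
    return pairs

def select_by_perplexity(pairs, log_prob, k=2):
    """Keep the k pairs whose code perplexity is closest to the median,
    countering the long-tail imbalance of self-generated samples
    (an illustrative heuristic; the paper's criterion may differ)."""
    ppl = [math.exp(-log_prob(c) / max(len(c.split()), 1)) for _, c in pairs]
    med = sorted(ppl)[len(ppl) // 2]
    ranked = sorted(zip(pairs, ppl), key=lambda t: abs(t[1] - med))
    return [p for p, _ in ranked[:k]]
```

In a real pipeline, `gen_code`/`gen_desc` would be sampling calls to the LLM being fine-tuned, `passes_tests` would execute unit tests, and `log_prob` would be the model's log-likelihood of the code; the surviving pairs form the next round of self-training data.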