Large Language Models (LLMs) have become increasingly integral to enhancing developer productivity, particularly in code generation, comprehension, and repair tasks. However, fine-tuning these models with high-quality, real-world data is challenging due to privacy concerns and the lack of accessible, labeled datasets. In this paper, we present DialogAgent, an automated tool for generating synthetic training data that closely mimics real developer interactions within Integrated Development Environments (IDEs). DialogAgent produces diverse, high-fidelity query-response pairs by simulating multi-turn dialogues and contextual behaviors observed in real-world programming scenarios. The tool significantly reduces the reliance on manual data generation, increasing efficiency by 4.8 times compared to traditional methods. Our experiments and online deployment demonstrate substantial improvements in model performance for code-related question-answering tasks: after training on synthetic data generated by DialogAgent, the acceptance rate of responses produced by our in-house model improved by 33%.