Modern Integrated Development Environments (IDEs) increasingly leverage Large Language Models (LLMs) to provide advanced features such as code autocompletion. While powerful, training these models on user-written code introduces significant privacy risks, turning the models themselves into a new kind of data vulnerability. Malicious actors can exploit this by mounting attacks that reconstruct sensitive training data or infer whether a specific code snippet was used during training. This paper investigates Differential Privacy (DP) as a robust defense mechanism when training an LLM for Kotlin code completion. We fine-tune a \texttt{Mellum} model with DP and conduct a comprehensive evaluation of both its privacy and its utility. Our results demonstrate that DP provides a strong defense against Membership Inference Attacks (MIAs), reducing attack success to near random guessing (AUC drops from 0.901 to 0.606). Furthermore, we show that this privacy guarantee comes at minimal cost to model performance: the DP-trained model achieves utility scores comparable to its non-private counterpart, even when trained on 100x less data. Our findings suggest that DP is a practical and effective solution for building private and trustworthy AI-powered IDE features.
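The abstract does not spell out the DP training procedure; the standard mechanism for DP fine-tuning is DP-SGD, which clips each example's gradient and adds calibrated Gaussian noise before the update. The following is a minimal sketch of one such step in plain Python, with hypothetical parameter names (\texttt{clip\_norm}, \texttt{noise\_mult}); it is illustrative only and is not the paper's actual training code.

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One illustrative DP-SGD step (sketch, not the paper's implementation).

    Each per-example gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, Gaussian noise with std = noise_mult * clip_norm is
    added per coordinate, and the noisy average is used for the update.
    """
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(per_example_grads)
    noisy_sum = [
        sum(c[i] for c in clipped) + random.gauss(0.0, noise_mult * clip_norm)
        for i in range(len(params))
    ]
    return [p - lr * s / n for p, s in zip(params, noisy_sum)]

# Toy demo: two examples, one oversized gradient that gets clipped.
random.seed(0)
new_params = dp_sgd_step(
    params=[0.5, -0.2],
    per_example_grads=[[3.0, 4.0], [0.1, 0.1]],  # first has L2 norm 5 -> clipped
    clip_norm=1.0,
    noise_mult=0.5,
)
print(new_params)
```

The noise scale relative to the clipping bound is what determines the privacy budget; larger \texttt{noise\_mult} gives stronger privacy at the cost of noisier updates.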
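The reported AUC numbers (0.901 without DP, 0.606 with DP) come from scoring a membership inference attack; the abstract does not specify the attack, but a common baseline is a loss-threshold MIA, where lower per-example loss signals membership. A minimal sketch of how such an attack is scored by AUC, using synthetic losses (the data below is fabricated for illustration, not the paper's):

```python
import random

def loss_threshold_mia_auc(member_losses, nonmember_losses):
    """AUC of a loss-threshold MIA, where lower loss predicts 'member'.

    Equal to P(member_loss < nonmember_loss) over all pairs, with ties
    counted as 0.5 -- the rank-statistic definition of ROC AUC.
    """
    wins = 0.0
    for m in member_losses:
        for n in nonmember_losses:
            if m < n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_losses) * len(nonmember_losses))

random.seed(0)
# Without DP: members are memorized, so their losses are clearly lower.
members_no_dp = [random.gauss(1.0, 0.3) for _ in range(200)]
nonmembers = [random.gauss(2.0, 0.3) for _ in range(200)]
print(round(loss_threshold_mia_auc(members_no_dp, nonmembers), 3))

# With DP: member and non-member losses overlap, pushing AUC toward 0.5.
members_dp = [random.gauss(1.9, 0.3) for _ in range(200)]
print(round(loss_threshold_mia_auc(members_dp, nonmembers), 3))
```

An AUC near 0.5 means the attacker cannot distinguish training members from non-members better than chance, which is the behavior the DP-trained model exhibits.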