Privacy computing is receiving increasing attention, but writing privacy-computing code remains challenging for developers due to limited library functions, which necessitates implementing functionality from scratch, and the data-oblivious requirement, which contradicts programmers' intuitive thinking and usual practices. Automating the generation of privacy-computing code with Large Language Models (LLMs) can streamline development and lower the barrier to using privacy-computing frameworks. However, existing LLMs still struggle with code translation for privacy-preserving computation, such as translating Python to MP-SPDZ, due to the scarcity of MP-SPDZ data required for effective pre-training or fine-tuning. Moreover, the lack of a benchmark further complicates the evaluation of translation quality. To address these limitations, this work proposes SPDZCoder, a rule-based framework that combines LLMs with expert knowledge to generate privacy-computing code without requiring additional training data. Specifically, SPDZCoder employs a rigorous procedure for collecting high-quality expert knowledge that captures the semantic differences between Python and MP-SPDZ, and derives transformation rules for translating Python to MP-SPDZ from this knowledge. SPDZCoder then progressively converts Python code into MP-SPDZ code by applying these transformation rules in a three-stage pipeline. To evaluate SPDZCoder, we manually constructed a benchmark dataset, SPDZEval, comprising six data splits, each representing a distinct class of challenging tasks in MP-SPDZ implementation. Extensive experiments show that SPDZCoder achieves superior performance, significantly surpassing baselines in pass@1 and pass@2. Specifically, SPDZCoder attains an overall correctness of 85.94% and 92.01% in pass@1 and pass@2, respectively, whereas the best-performing baseline achieves 63.58% and 76.36%, respectively.
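To make the data-oblivious requirement mentioned above concrete, the following plain-Python sketch shows the kind of rewrite that secure-computation frameworks such as MP-SPDZ demand: control flow must not depend on secret values, so a data-dependent branch like `max(a, b)` is replaced by arithmetic selection. This is an illustration only; real MP-SPDZ programs would operate on secret types such as `sint` rather than Python ints.

```python
def oblivious_max(a: int, b: int) -> int:
    """Return max(a, b) without a secret-dependent branch.

    An intuitive implementation would use `if a > b: ...`, but branching
    on secret data leaks information in secure computation. Instead, the
    comparison result is treated as a 0/1 value (in MP-SPDZ it would be a
    secret bit) and used for arithmetic selection.
    """
    c = int(a > b)               # selection bit: 1 if a is larger, else 0
    return c * a + (1 - c) * b   # oblivious select: no if/else on secrets
```

Both execution paths perform the same sequence of operations regardless of the inputs, which is what makes the computation data-oblivious.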