Essential to a sound data market is the ability to privately select and evaluate training data before a transaction between the data owner and the model owner is finalized. To protect the privacy of both the data and the model, this process evaluates the target model under secure Multi-Party Computation (MPC). While prior work has concluded that MPC-based evaluation of Transformer models is prohibitively expensive, this paper introduces an approach that makes such data selection practical. The contributions are threefold: (1) a pipeline for private data selection via MPC, (2) emulating expensive high-dimensional operations with simple low-dimensional MLPs trained on a small subset of pertinent data, and (3) executing MPC in a parallel, multi-phase fashion. The proposed method is evaluated across a range of Transformer models and NLP/CV benchmarks. Compared with direct MPC evaluation of the target model, our approach reduces the required time from thousands of hours to tens of hours, with only a 0.20% drop in accuracy when training on the selected data.
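The idea behind contribution (2) can be sketched in miniature. The example below is a hypothetical illustration, not the paper's actual method: it fits a tiny one-hidden-layer MLP (ReLU features with a least-squares readout) to imitate GELU, a nonlinearity that is cheap in plaintext but costly under MPC, using only a small sample of inputs. The input range, layer width, and sampling distribution are all assumptions made for the sketch.

```python
import numpy as np

# GELU (tanh approximation): cheap in plaintext, expensive under MPC
# because of the tanh evaluation.
def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)

# A small sample of activations, standing in for the paper's "limited subset
# of pertinent data" (the uniform distribution here is a placeholder).
x = rng.uniform(-4.0, 4.0, size=(2048, 1))
y = gelu(x)

# One hidden layer of ReLU units with fixed, evenly spread kink locations;
# only the linear readout is fitted (exact least squares), so the surrogate
# uses just matrix multiplies and ReLU, both of which are MPC-friendly.
H = 16
W1 = np.ones((1, H))
b1 = -np.linspace(-4.0, 4.0, H)                # spread the kinks over the range
h = np.maximum(x @ W1 + b1, 0.0)               # hidden ReLU features
feats = np.hstack([h, np.ones((len(x), 1))])   # append a constant bias feature
w, *_ = np.linalg.lstsq(feats, y, rcond=None)  # fit the readout weights

pred = feats @ w
mse = float(np.mean((pred - y) ** 2))
print(f"MLP surrogate MSE on GELU: {mse:.6f}")
```

On this 1-D toy problem the piecewise-linear surrogate tracks GELU closely; the paper's point is that the same trick, applied to genuinely high-dimensional operations, trades a small accuracy loss for a large reduction in MPC cost.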