Evaluating the relevance of data is a critical task for model builders seeking to acquire datasets that enhance model performance. Ideally, such evaluation should allow the model builder to assess the utility of candidate data without exposing proprietary details of the model. At the same time, data providers must be assured that no information about their data, beyond the computed utility score, is disclosed to the model builder. In this paper, we present PrivaDE, a cryptographic protocol for privacy-preserving utility scoring and selection of data for machine learning. While prior works have proposed data-evaluation protocols, our approach advances the state of the art through a practical, blockchain-centric design. Leveraging the trustless nature of blockchains, PrivaDE enforces malicious-security guarantees and provides strong privacy protection for both models and datasets. To achieve efficiency, we integrate several techniques, including model distillation, model splitting, and cut-and-choose zero-knowledge proofs, bringing the runtime down to a practical level. Furthermore, we propose a unified utility scoring function that combines empirical loss, predictive entropy, and feature-space diversity, and that can be seamlessly integrated into active-learning workflows. Our evaluation shows that PrivaDE performs data evaluation effectively, achieving online runtimes under 15 minutes even for models with millions of parameters. Our work lays the foundation for fair and automated data marketplaces in decentralized machine learning ecosystems.
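To make the unified utility score concrete, the sketch below shows one plausible plaintext formulation combining the three components named in the abstract: empirical loss, predictive entropy, and feature-space diversity. The weighting scheme, the nearest-neighbour diversity measure, and all function and parameter names are illustrative assumptions, not the paper's exact (cryptographically evaluated) formulation.

```python
import numpy as np

def utility_score(probs, labels, features, ref_features, w=(1.0, 1.0, 1.0)):
    """Hypothetical unified utility score for a candidate dataset.

    probs:        (n, k) model predictive probabilities on candidate points
    labels:       (n,)   candidate labels
    features:     (n, d) feature embeddings of candidate points
    ref_features: (m, d) embeddings of data the builder already holds
    w:            illustrative weights for the three terms
    """
    eps = 1e-12
    # Empirical loss: mean cross-entropy of predictions on candidate labels.
    loss = -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
    # Predictive entropy: mean Shannon entropy of the predictive distributions
    # (high entropy = the model is uncertain, so the data may be informative).
    entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Feature-space diversity: mean distance from each candidate point to its
    # nearest neighbour among the builder's reference points.
    dists = np.linalg.norm(features[:, None, :] - ref_features[None, :, :], axis=2)
    diversity = np.mean(dists.min(axis=1))
    return w[0] * loss + w[1] * entropy + w[2] * diversity
```

In PrivaDE this score would be computed inside the protocol, so neither party sees the other's inputs, only the final value; the plaintext version above is just a reference point for what is being evaluated.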