Designing privacy-preserving machine learning algorithms has received great attention in recent years, especially in settings where the data contains sensitive information. Differential privacy (DP) is a widely used mechanism for data analysis with privacy guarantees. In this paper, we propose a differentially private random feature model. Random features, originally proposed to approximate large-scale kernel machines, have also been used to study privacy-preserving kernel machines. We consider the over-parametrized regime (more features than samples), where the non-private random feature model is learned by solving the min-norm interpolation problem, and we then apply output perturbation to produce a private model. We show that our method preserves privacy and derive a generalization error bound for it. To the best of our knowledge, we are the first to consider privacy-preserving random feature models in the over-parametrized regime and to provide theoretical guarantees. We also empirically compare our method with other privacy-preserving learning methods from the literature. Our results show that our approach outperforms these methods in generalization performance on synthetic data and benchmark data sets. Additionally, it was recently observed that DP mechanisms may exhibit and exacerbate disparate impact, meaning that the outcomes of DP learning algorithms can vary significantly across groups. We show, both theoretically and empirically, that random features have the potential to reduce disparate impact and hence achieve better fairness.
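To make the pipeline concrete, the following minimal sketch (NumPy, not from the paper) fits an over-parametrized random Fourier feature model via min-norm interpolation and then perturbs its coefficients with Gaussian noise. The feature map, the `noise_scale` parameter, and all function names are illustrative assumptions; in particular, `noise_scale` stands in for the sensitivity-calibrated scale that a formal (epsilon, delta)-DP guarantee would require.

```python
import numpy as np

def random_fourier_features(X, W, b):
    # Map inputs to N random Fourier features approximating a Gaussian kernel.
    N = W.shape[1]
    return np.sqrt(2.0 / N) * np.cos(X @ W + b)

def private_random_feature_model(X, y, N=2000, gamma=1.0, noise_scale=0.1, rng=None):
    """Sketch: min-norm interpolating random feature fit + output perturbation.
    `noise_scale` is a placeholder, not the paper's calibrated DP noise level."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Random feature weights are sampled once and are data-independent.
    W = rng.normal(scale=gamma, size=(d, N))
    b = rng.uniform(0.0, 2.0 * np.pi, size=N)
    Z = random_fourier_features(X, W, b)      # n x N feature matrix, N > n
    # Min-norm interpolation: smallest-l2-norm coefficients with Z c = y.
    c = np.linalg.pinv(Z) @ y
    # Output perturbation: add noise to the non-private solution.
    c_priv = c + rng.normal(scale=noise_scale, size=c.shape)
    predict = lambda X_new: random_fourier_features(X_new, W, b) @ c_priv
    return c_priv, predict
```

Because the random feature weights are data-independent, only the learned coefficients need to be privatized, which is what makes output perturbation a natural fit for this model.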