A Private Repetition algorithm takes as input a differentially private algorithm with constant success probability and boosts it to one that succeeds with high probability. These algorithms are closely related to private metaselection algorithms, which compete with the best of many private algorithms, and to private hyperparameter tuning algorithms, which compete with the best hyperparameter settings for a private learning algorithm. Existing algorithms for these tasks pay either a large overhead in privacy cost or a large overhead in computational cost. In this work, we prove strong lower bounds for problems of this kind, showing in particular that for any algorithm that preserves the privacy cost up to a constant factor, the failure probability can only fall polynomially in the computational overhead. This stands in stark contrast with the non-private setting, where the failure probability falls exponentially in the computational overhead. By carefully combining existing algorithms for metaselection, we prove computation-privacy tradeoffs that nearly match our lower bounds.
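To make the contrast concrete, here is a toy sketch of the non-private baseline: if each independent run fails with probability `p_fail` and a successful run can always be recognized, the failure probability after `k` runs decays exponentially. The polynomial-decay comparison below is purely illustrative of the shape of the lower bound; the function name and the exponent are not taken from the paper.

```python
def repeated_failure_prob(p_fail: float, k: int) -> float:
    """Failure probability after k independent repetitions, assuming a
    successful run can always be recognized (the non-private setting)."""
    return p_fail ** k

ks = (1, 5, 10)

# Non-private: failure falls exponentially in the number of runs k.
exp_decay = [repeated_failure_prob(1 / 3, k) for k in ks]

# Illustrative polynomial decay, the best shape achievable (per the
# lower bound) by private repetition that keeps the privacy cost
# within a constant factor. The exponent 2 is a placeholder.
poly_decay = [1 / k**2 for k in ks]

# For large k, the exponential rate wins by a wide margin.
```

The gap between the two lists is the phenomenon the abstract describes: computational overhead buys exponentially small failure probability without privacy, but only polynomially small failure probability under a constant-factor privacy budget.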