This position paper argues that setting the privacy budget should not be viewed as an important limitation of differential privacy relative to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty of interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that, given the current state of research, any sound method for estimating privacy risks should be expressible within the differential privacy framework or justify why it cannot be.
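As an illustration of what "interpreting the privacy budget" involves, a pure ε-DP guarantee can be translated into a bound on a Bayesian adversary's posterior belief that a target record was in the training data: ε-DP caps the likelihood ratio of any observed output at e^ε, so posterior odds are at most e^ε times the prior odds. The sketch below assumes pure ε-DP (δ = 0) and a single binary membership hypothesis; the function name is ours.

```python
import math

def membership_posterior_bound(eps: float, prior: float = 0.5) -> float:
    """Upper bound on an adversary's posterior probability that a record
    was in the training set, under pure eps-DP.

    eps-DP bounds the likelihood ratio of any output by exp(eps), so
    posterior odds <= exp(eps) * prior odds (illustrative sketch, not a
    full risk assessment: it ignores delta, composition, and auxiliary
    information beyond the mechanism's output).
    """
    prior_odds = prior / (1.0 - prior)
    post_odds = math.exp(eps) * prior_odds
    return post_odds / (1.0 + post_odds)

# With eps = 1.0 and a uniform prior, the posterior is capped at
# e / (1 + e), roughly 0.73 -- far from certainty, but well above 0.5.
print(membership_posterior_bound(1.0))
```

Note that this translation already requires contextual choices (the adversary's prior, the threat model), which is precisely the kind of in-context risk estimation the paper argues is hard for any method, not only for differential privacy.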