This paper explores the implications of guaranteeing privacy by imposing a lower bound on the information density between the private and the public data. We introduce a novel and operationally meaningful privacy measure called pointwise maximal cost (PMC) and demonstrate that imposing an upper bound on PMC is equivalent to enforcing a lower bound on the information density. PMC quantifies the information leakage about a secret to adversaries who aim to minimize non-negative cost functions after observing the outcome of a privacy mechanism. When restricted to finite alphabets, PMC can equivalently be defined as the information leakage to adversaries aiming to minimize the probability of incorrectly guessing randomized functions of the secret. We study the properties of PMC and apply it to standard privacy mechanisms to demonstrate its practical relevance. Through a detailed examination, we connect PMC with other privacy measures that impose upper or lower bounds on the information density. These are pointwise maximal leakage (PML), local differential privacy (LDP), and (asymmetric) local information privacy. In particular, we show that a mechanism satisfies LDP if and only if it has both bounded PMC and bounded PML. Overall, our work fills a conceptual and operational gap in the taxonomy of privacy measures, bridges existing disconnects between different frameworks, and offers insights for selecting a suitable notion of privacy in a given application.