This paper explores the implications of guaranteeing privacy by imposing a lower bound on the information density between the private and the public data. We introduce an operationally meaningful privacy measure called pointwise maximal cost (PMC) and demonstrate that imposing an upper bound on PMC is equivalent to enforcing a lower bound on the information density. PMC quantifies the information leakage about a secret to adversaries who aim to minimize non-negative cost functions after observing the outcome of a privacy mechanism. When restricted to finite alphabets, PMC can equivalently be defined as the information leakage to adversaries aiming to minimize the probability of incorrectly guessing randomized functions of the secret. We study the properties of PMC and apply it to standard privacy mechanisms to demonstrate its practical relevance. Through a detailed examination, we connect PMC with other privacy measures that impose upper or lower bounds on the information density. Our results highlight that lower bounding the information density is a more stringent requirement than upper bounding it. Overall, our work significantly bridges the gaps in understanding the relationships between various privacy frameworks and provides insights for selecting a suitable framework for a given application.
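As a minimal numerical sketch of the central quantity, the code below computes the pointwise information density i(x; y) = log2(P(y|x) / P(y)) over a finite alphabet, using binary randomized response as a stand-in mechanism. The function name, the example channel, and the flip probability are illustrative assumptions, not constructions from the paper; the point is only that a lower-bound constraint controls the minimum of i(x; y) while an upper-bound constraint controls its maximum.

```python
import math

# Illustrative sketch (not from the paper): pointwise information density
# i(x; y) = log2( P(y|x) / P(y) ) for a finite-alphabet privacy mechanism.

def information_density(p_x, p_y_given_x):
    """Return i(x; y) for every (x, y) pair, given P(X) and the channel P(Y|X)."""
    n_x = len(p_x)
    n_y = len(p_y_given_x[0])
    # Marginal P(y) = sum_x P(x) * P(y|x)
    p_y = [sum(p_x[x] * p_y_given_x[x][y] for x in range(n_x)) for y in range(n_y)]
    return [[math.log2(p_y_given_x[x][y] / p_y[y]) for y in range(n_y)]
            for x in range(n_x)]

# Example channel: binary randomized response, output equals input with prob 1 - flip.
flip = 0.25
p_x = [0.5, 0.5]
channel = [[1 - flip, flip],
           [flip, 1 - flip]]

dens = information_density(p_x, channel)
lower = min(min(row) for row in dens)  # what a lower-bound (PMC-style) constraint controls
upper = max(max(row) for row in dens)  # what an upper-bound constraint controls
print(lower, upper)
```

With a uniform input and flip probability 0.25, the marginal is uniform, so the density ranges from log2(0.25/0.5) = -1 up to log2(0.75/0.5) ≈ 0.585; lower bounding the density rules out outputs that make any secret value much less likely than its prior, which is the stricter of the two requirements discussed in the abstract.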