Linear queries, which underpin a broad range of analysis tasks, are often released through privacy mechanisms based on differential privacy (DP), the most widely used framework for privacy protection. However, DP adopts a context-free definition that is independent of the data-generating distribution. In this paper, we revisit the privacy analysis of the Laplace mechanism through the lens of pointwise maximal leakage (PML). We demonstrate that the distribution-agnostic definition of DP often mandates excessive noise. To address this, we incorporate an assumption about the prior distribution by lower-bounding the probability that any single record belongs to any specific class. Under this assumption, we derive a tight, context-aware leakage bound for general linear queries, and prove that our bound is strictly tighter than the standard DP guarantee and converges to it as the probability lower bound approaches zero. Numerical evaluations demonstrate that, by exploiting this prior knowledge, the required noise scale can be reduced while maintaining the privacy guarantee.
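As background for the noise-scale discussion, the following is a minimal sketch of the standard ε-DP Laplace mechanism for a linear (counting) query, where the noise scale is the query's sensitivity divided by ε. The function name and parameters are illustrative; this shows the baseline mechanism only, not the paper's PML-based, prior-aware calibration, which would justify a smaller scale under the stated assumption on the prior.

```python
import numpy as np

def laplace_mechanism(query_answer, sensitivity, epsilon, rng=None):
    """Release a linear-query answer under standard epsilon-DP.

    Adds Laplace noise with scale = sensitivity / epsilon, the
    calibration the paper argues is often larger than necessary
    once prior knowledge about the data is taken into account.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return query_answer + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1 (one record changes
# the count by at most 1), so the noise scale is 1 / epsilon.
data = np.array([0, 1, 1, 0, 1])
true_count = float(data.sum())
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0)
```

A smaller ε means stronger privacy and a larger noise scale; the paper's context-aware bound aims to certify the same leakage level at a reduced scale.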