The distinguishability of probability distributions, as quantified by information measures after processing by a privacy mechanism, has been a useful tool for studying various statistical and operational tasks under privacy constraints. To this end, standard data-processing inequalities and strong data-processing inequalities (SDPIs) are employed. Most previously known characterizations of the contraction of information measures, including the tight ones for total variation distance, hockey-stick divergences, and $f$-divergences, apply only to $(\varepsilon,0)$-locally differentially private (LDP) mechanisms. In this work, we derive both linear and non-linear strong data-processing inequalities for the hockey-stick divergence and for $f$-divergences that are valid for all $(\varepsilon,\delta)$-LDP mechanisms, even when $\delta \neq 0$. Our results either generalize or improve the previously known bounds on the contraction of these distinguishability measures.
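As a point of reference (these are the standard definitions from the LDP literature, stated here for orientation; the paper's own conventions may differ slightly), the hockey-stick divergence of order $\gamma \ge 1$ is
\[
E_\gamma(P \,\|\, Q) \;=\; \sup_{A}\bigl(P(A) - \gamma\, Q(A)\bigr),
\]
with $E_1$ coinciding with the total variation distance, and a mechanism $K$ is $(\varepsilon,\delta)$-LDP if and only if
\[
\sup_{x,x'} E_{e^{\varepsilon}}\bigl(K(\cdot \mid x)\,\big\|\,K(\cdot \mid x')\bigr) \;\le\; \delta .
\]
In this language, a linear SDPI bounds the output divergence by a constant multiple of the input divergence, $D(PK \,\|\, QK) \le \eta\, D(P \,\|\, Q)$ with contraction coefficient $\eta < 1$, whereas a non-linear SDPI replaces $\eta\, D(P \,\|\, Q)$ with a concave function of $D(P \,\|\, Q)$.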