In recent years, Gaussian noise has become a popular tool in differentially private algorithms, often replacing the Laplace noise that dominated the early literature. Gaussian noise is the standard approach to $\textit{approximate}$ differential privacy, often resulting in much higher utility than traditional (pure) differential privacy mechanisms. In this paper we argue that Laplace noise may in fact be preferable to Gaussian noise in many settings, in particular for $(\varepsilon,\delta)$-differential privacy when $\delta$ is small. We consider two scenarios: First, we consider the problem of counting under continual observation and present a new generalization of the binary tree mechanism that uses a $k$-ary number system with $\textit{negative digits}$ to improve the privacy-accuracy trade-off. Our mechanism uses Laplace noise, and whenever $\delta$ is sufficiently small it improves the mean squared error over the best possible $(\varepsilon,\delta)$-differentially private factorization mechanisms based on Gaussian noise. Specifically, using $k=19$ we obtain an asymptotic improvement over the bound given by Henzinger, Upadhyay, and Upadhyay (SODA 2023) when $\delta = O(T^{-0.92})$. Second, we show that the noise added by the Gaussian mechanism can always be replaced by Laplace noise of comparable variance with the same $(\varepsilon,\delta)$-differential privacy guarantee, and in fact for sufficiently small $\delta$ the variance of the Laplace noise becomes strictly better. This challenges the conventional wisdom that Gaussian noise should be preferred in high-dimensional settings. Finally, we study whether counting under continual observation may be easier in an average-case sense. We show that, under pure differential privacy, the expected worst-case error for a random input must be $\Omega(\log(T)/\varepsilon)$, matching the known lower bound for worst-case inputs.
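The Laplace-versus-Gaussian variance gap for small $\delta$ can be illustrated with a minimal numeric sketch. This uses the textbook forms only (Laplace scale $\Delta/\varepsilon$ for pure $\varepsilon$-DP, hence also $(\varepsilon,\delta)$-DP; the classical Gaussian mechanism calibration $\sigma = \sqrt{2\ln(1.25/\delta)}\,\Delta/\varepsilon$), not the paper's own bounds:

```python
import math

def laplace_variance(sensitivity, eps):
    # Laplace mechanism with scale b = sensitivity / eps satisfies pure eps-DP,
    # and therefore (eps, delta)-DP for every delta. Var[Laplace(b)] = 2 b^2.
    b = sensitivity / eps
    return 2 * b * b

def gaussian_variance(sensitivity, eps, delta):
    # Classical Gaussian mechanism calibration: for eps < 1,
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps suffices for (eps, delta)-DP.
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / eps
    return sigma * sigma

# The Gaussian variance grows like ln(1/delta) as delta shrinks,
# while the Laplace variance is independent of delta.
for delta in (1e-5, 1e-10, 1e-20):
    print(delta, laplace_variance(1, 0.5), gaussian_variance(1, 0.5, delta))
```

This only shows the qualitative $\ln(1/\delta)$ growth; the paper's tighter comparison accounts for the different sensitivities ($\ell_1$ vs. $\ell_2$) that the two mechanisms use in high dimensions.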
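The $k$-ary number system with negative digits can be illustrated by a balanced digit decomposition, where each digit lies in $\{-(k-1)/2,\dots,(k-1)/2\}$ for odd $k$. The sketch below shows only this representation (for $k=19$, digits range over $-9,\dots,9$); the paper's actual tree mechanism built on top of it is not reproduced here:

```python
def balanced_digits(n, k):
    # Decompose a nonnegative integer n in base k (odd k assumed) with
    # digits in {-(k-1)//2, ..., (k-1)//2}, least significant digit first.
    digits = []
    while n != 0:
        d = n % k
        if d > k // 2:
            d -= k  # shift large remainders into the negative range
        digits.append(d)
        n = (n - d) // k
    return digits

# Example with k = 19, the value highlighted in the abstract:
# every prefix count decomposes into digits bounded by 9 in absolute value.
print(balanced_digits(12345, 19))
```

Allowing negative digits roughly halves the magnitude of each digit compared with the standard base-$k$ representation, which is the kind of constant-factor saving the mechanism exploits in its privacy-accuracy trade-off.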