We study approximation algorithms for Maximum Constraint Satisfaction Problems (Max-CSPs) under differential privacy (DP), where the constraints are considered sensitive data. Information-theoretically, we aim to classify the best approximation ratios achievable for a given privacy budget $\varepsilon$. In the high-privacy regime ($\varepsilon \ll 1$), we show that no $\varepsilon$-DP algorithm can beat a random assignment by more than $O(\varepsilon)$ in the approximation ratio. We devise a polynomial-time algorithm that matches this barrier under the assumptions that the instances have bounded degree and are triangle-free. Finally, we show that one or both of these assumptions can be removed for specific CSPs, such as Max-Cut or Max $k$-XOR, albeit at the cost of computational efficiency.