Accounting for privacy loss under fully adaptive composition -- where both the choice of mechanisms and their privacy parameters may depend on the entire history of prior outputs -- is a central challenge in differential privacy (DP). In this setting, privacy filters are stopping rules for compositions that ensure a prescribed global privacy budget is not exceeded. It remains unclear whether optimal trade-off-function-based notions, such as $f$-DP, admit valid privacy filters under fully adaptive interaction. We show that the natural approach to defining an $f$-DP filter -- composing individual trade-off curves and stopping when the prescribed $f$-DP curve is crossed -- is fundamentally invalid. We characterise when and why this failure occurs, and establish necessary and sufficient conditions under which the natural filter is valid. Furthermore, we prove a fully adaptive central limit theorem for $f$-DP and construct an approximate Gaussian DP filter for subsampled Gaussian mechanisms at small sampling rates $q<0.2$ and large sampling rates $q>0.8$, yielding tighter privacy guarantees than filters based on Rényi DP in the same setting.