We resolve an open question of Jain, Raskhodnikova, Sivakumar, and Smith (ICML 2023) by exhibiting a problem that separates differential privacy under continual observation in the oblivious and adaptive settings. The continual observation (a.k.a. continual release) model formalizes privacy for streaming algorithms, where data is received over time and an output is released at each time step. In the oblivious setting, privacy need only hold for data streams fixed in advance; in the adaptive setting, privacy is required even for streams chosen adaptively based on the streaming algorithm's output. We describe the first explicit separation between the oblivious and adaptive settings. The problem exhibiting this separation is based on the correlated vector queries problem of Bun, Steinke, and Ullman (SODA 2017). Specifically, we present an $(\varepsilon,0)$-DP algorithm for the oblivious setting that remains accurate for a number of time steps exponential in the dimension of the input. On the other hand, we show that every $(\varepsilon,\delta)$-DP adaptive algorithm fails to be accurate after releasing output for only a constant number of time steps.