The conditional sampling model, introduced by Canonne, Ron and Servedio (SODA 2014, SIAM J.\ Comput.\ 2015) and independently by Chakraborty, Fischer, Goldhirsh and Matsliah (ITCS 2013, SIAM J.\ Comput.\ 2016), is a common framework for a number of studies (and variant models) of strengthened distribution testing models. A core task in these investigations is that of estimating the mass of individual elements. The above works, as well as the improvement of Kumar, Meel and Pote (AISTATS 2025), have all yielded polylogarithmic algorithms. In this work we shatter the polylogarithmic barrier, and provide an estimator for individual elements that uses only $O(\log \log N) + O(\mathrm{poly}(1/\varepsilon))$ conditional samples. This in particular provides an improvement (and in some cases a unifying framework) for a number of related tasks, such as testing by learning of any label-invariant property, and distance estimation between two (unknown) distributions. The work of Chakraborty, Chakraborty and Kumar (SODA 2024) contains lower bounds for some of the above tasks. We derive from their work a nearly matching lower bound of $\tilde\Omega(\log\log N)$ for the estimation task. We also show that the full power of the conditional model is indeed required for the double-logarithmic bound. For the testing of label-invariant properties, we exponentially improve the previous lower bound from double-logarithmic to $\Omega(\log N)$ conditional samples, whereas our testing by learning algorithm provides an upper bound of $O(\mathrm{poly}(1/\varepsilon)\cdot\log N \log \log N)$.
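To illustrate the model itself (not the paper's $O(\log\log N)$ estimator), the following is a minimal sketch of a COND oracle over an explicitly given distribution, together with the classical pairwise-comparison primitive: conditioning on a two-element set $\{x,y\}$ lets one estimate the ratio of the two masses. All names here (`make_cond_oracle`, `compare_mass`) are illustrative, not from the paper.

```python
import random

def make_cond_oracle(dist):
    """dist: dict mapping elements to probabilities summing to 1.
    Returns a COND oracle: given a subset S of positive mass, it returns
    an element of S drawn with probability proportional to dist."""
    def cond(S):
        # Sort for deterministic iteration order; restrict to the support.
        items = sorted(x for x in S if dist.get(x, 0) > 0)
        weights = [dist[x] for x in items]
        return random.choices(items, weights=weights, k=1)[0]
    return cond

def compare_mass(cond, x, y, samples=10000):
    """Estimate dist[x] / dist[y] by sampling conditioned on {x, y}:
    within the pair, x appears with frequency dist[x]/(dist[x]+dist[y])."""
    hits_x = sum(1 for _ in range(samples) if cond({x, y}) == x)
    hits_y = samples - hits_x
    return hits_x / max(hits_y, 1)
```

For example, on a distribution assigning masses $0.8$ and $0.2$ to two elements, `compare_mass` concentrates around the true ratio $4$. Chaining such comparisons along a sequence of reference elements is what earlier works iterate polylogarithmically many times; the point of the present paper is that a cleverer use of conditioning reduces the sample count to double-logarithmic in $N$.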