The conditional sampling model, introduced by Canonne, Ron and Servedio (SODA 2014, SIAM J. Comput. 2015) and independently by Chakraborty, Fischer, Goldhirsh and Matsliah (ITCS 2013, SIAM J. Comput. 2016), is a common framework for a number of studies concerning strengthened models of distribution testing. A core task in these investigations is that of estimating the mass of individual elements. The above-mentioned works, and the improvement of Kumar, Meel and Pote (AISTATS 2025), provided algorithms for this task that use polylogarithmically many conditional samples. In this work we shatter the polylogarithmic barrier, and provide an estimator for the mass of individual elements that uses only $O(\log \log N) + O(\mathrm{poly}(1/\varepsilon))$ conditional samples. We complement this result with an $\Omega(\log\log N)$ lower bound. We then show that our mass estimator provides an improvement (and in some cases a unifying framework) for a number of related tasks, such as testing by learning of any label-invariant property, and distance estimation between two (unknown) distributions. By considering some known lower bounds, this also shows that the full power of the conditional model is indeed required for the doubly-logarithmic upper bound. Finally, we exponentially improve the previous lower bound on testing by learning of label-invariant properties from double-logarithmic to $\Omega(\log N)$ conditional samples, whereas our testing by learning algorithm provides an upper bound of $O(\mathrm{poly}(1/\varepsilon)\cdot\log N \log \log N)$.
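For readers unfamiliar with the model: a conditional sampling oracle, given a distribution $p$ over $\{1,\ldots,N\}$ and a query set $S$, returns an element of $S$ drawn with probability proportional to its mass under $p$. Below is a minimal simulation sketch of such an oracle; the function name and signature are illustrative only (the abstract's algorithms are not reproduced here, and in the actual model the algorithm has oracle access rather than the full vector $p$).

```python
import random

def cond_sample(p, S, rng=random):
    """Simulate one conditional-sampling query.

    p: list of probabilities over {0, ..., N-1} (summing to 1).
    S: a subset of indices with positive mass under p.
    Returns an element i of S with probability p[i] / p(S).
    """
    S = sorted(S)
    mass = sum(p[i] for i in S)
    if mass == 0:
        raise ValueError("conditioning set has zero mass")
    r = rng.random() * mass  # uniform point in [0, p(S))
    acc = 0.0
    for i in S:
        acc += p[i]
        if r < acc:
            return i
    return S[-1]  # guard against floating-point rounding
```

Querying with $S = \{1,\ldots,N\}$ recovers ordinary sampling, which is what makes the model a strict strengthening of the standard sampling model.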