The conditional sampling model, introduced by Canonne, Ron and Servedio (SODA 2014, SIAM J. Comput. 2015) and independently by Chakraborty, Fischer, Goldhirsh and Matsliah (ITCS 2013, SIAM J. Comput. 2016), is a common framework for a number of studies concerning strengthened models of distribution testing. A core task in these investigations is that of estimating the mass of individual elements. The above-mentioned works, and the improvement of Kumar, Meel and Pote (AISTATS 2025), provided polylogarithmic algorithms for this task. In this work we shatter the polylogarithmic barrier, and provide an estimator for the mass of individual elements that uses only $O(\log \log N) + O(\mathrm{poly}(1/\varepsilon))$ conditional samples. We complement this result with an $\Omega(\log\log N)$ lower bound. We then show that our mass estimator provides an improvement (and in some cases a unifying framework) for a number of related tasks, such as testing by learning of any label-invariant property, and distance estimation between two (unknown) distributions. By considering some known lower bounds, this also shows that the full power of the conditional model is indeed required for the doubly-logarithmic upper bound. Finally, we exponentially improve the previous lower bound on testing by learning of label-invariant properties from doubly-logarithmic to $\Omega(\log N)$ conditional samples, whereas our testing-by-learning algorithm provides an upper bound of $O(\mathrm{poly}(1/\varepsilon)\cdot\log N \log \log N)$.