We study the utility and limitations of $k$-uniform hypergraphs $H = ([n], E)$ (with $n \ge \mathrm{poly}(k)$) for error reduction in randomized decision algorithms with one- or two-sided error. The error-reduction procedure samples a uniformly random hyperedge of $H$ and repeats the algorithm $k$ times, using the vertices of the hyperedge as seeds. This is a general paradigm that captures every pseudorandom method generating $k$ seeds without repetition. We show two results that together imply a gap between the typical and the worst-case behavior of using $H$ for error reduction. First, in the context of one-sided error reduction, if a random hyperedge of $H$ decreases the error probability from $p$ to $p^k + \varepsilon$, then $H$ cannot have too few edges: $|E| = \Omega(n k^{-1} \varepsilon^{-1})$. Consequently, the number of random bits needed to reduce the error from $p$ to $p^k + \varepsilon$ cannot be brought below $\lg n + \lg(\varepsilon^{-1}) - \lg k + O(1)$. This also holds for hypergraphs of average uniformity $k$, and the result implies new lower bounds for dispersers and vertex expanders. Second, if the vertex degrees are reasonably distributed, we show that in a $(1-o(1))$-fraction of cases, choosing $k$ pseudorandom seeds via $H$ reduces the error probability to at most $o(1)$ above that of $k$ IID seeds, for algorithms with either one- or two-sided error. Thus, despite our lower bound, for a $(1-o(1))$-fraction of randomized decision algorithms (and inputs), the advantage of IID samples over samples drawn from a uniformly random edge of a reasonable hypergraph is negligible.
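The sampling paradigm for one-sided error can be sketched as follows. This is a minimal illustration, not the paper's construction: the hypergraph `edges`, the randomized algorithm `algo`, and the acceptance rule are hypothetical stand-ins, assuming the algorithm errs only by accepting (so a single rejecting run is conclusive).

```python
import random

def reduce_error_one_sided(algo, x, edges, rng=random):
    """Amplify a one-sided-error algorithm using a hypergraph.

    Draws one uniformly random hyperedge of H and runs `algo` on
    input `x` once per vertex, using each vertex as the seed.
    With one-sided error (false accepts only), we accept only if
    every run accepts; this uses lg|E| random bits instead of the
    k*lg(n) bits needed for k IID seeds.
    """
    edge = rng.choice(edges)  # uniformly random hyperedge of H
    return all(algo(x, seed=v) for v in edge)
```

With $k$ IID seeds the error drops from $p$ to $p^k$; the abstract's first result bounds how small $|E|$ (and hence the seed length $\lg|E|$) can be if this procedure is to achieve error $p^k + \varepsilon$.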