The seminal work of Linial, Mansour, and Nisan gave a quasipolynomial-time algorithm for learning constant-depth circuits ($\mathsf{AC}^0$) with respect to the uniform distribution on the hypercube. Extending their algorithm to the setting of malicious noise, where both covariates and labels can be adversarially corrupted, has remained open. Here we achieve such a result, inspired by recent work on learning with distribution shift. Our running time essentially matches that of their algorithm, which is known to be optimal assuming various cryptographic primitives. Our proof uses a simple outlier-removal method combined with Braverman's theorem for fooling constant-depth circuits. We attain the best possible dependence on the noise rate and succeed in the harshest possible noise model (i.e., contamination, or so-called "nasty noise").
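For intuition only, the Linial-Mansour-Nisan approach learns by estimating all low-degree Fourier coefficients of the target from samples and predicting with the sign of the truncated expansion. The sketch below is a minimal illustration of that low-degree step over the $\pm 1$ hypercube, not the paper's noise-tolerant algorithm; all function names are hypothetical, and no outlier removal is shown.

```python
import itertools
import random

def chi(S, x):
    # Parity character chi_S(x) = prod_{i in S} x_i for x in {-1, 1}^n.
    p = 1
    for i in S:
        p *= x[i]
    return p

def estimate_low_degree_coeffs(samples, n, d):
    """Empirical Fourier coefficients hat{f}(S) = E[f(x) * chi_S(x)]
    for every subset S of size at most d (the core LMN estimate)."""
    coeffs = {}
    for k in range(d + 1):
        for S in itertools.combinations(range(n), k):
            coeffs[S] = sum(y * chi(S, x) for x, y in samples) / len(samples)
    return coeffs

def low_degree_predictor(coeffs):
    # Hypothesis: sign of the degree-d truncation of the Fourier expansion.
    def h(x):
        val = sum(c * chi(S, x) for S, c in coeffs.items())
        return 1 if val >= 0 else -1
    return h
```

For $\mathsf{AC}^0$, LMN show the Fourier mass concentrates on degree $d = \mathrm{polylog}(n)$, so enumerating the $n^{O(d)}$ subsets gives the quasipolynomial running time; the sketch takes $d$ as a parameter.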