Parities have become a standard benchmark for evaluating learning algorithms. Recent works show that regular neural networks trained by gradient descent can efficiently learn degree $k$ parities on uniform inputs for constant $k$, but fail to do so when both $k$ and $d-k$ grow with $d$ (here $d$ is the ambient dimension). However, the case where $k=d-O_d(1)$ (almost-full parities), including the degree $d$ parity (the full parity), has remained unsettled. This paper shows that for gradient descent on regular neural networks, learnability depends on the initial weight distribution. On one hand, the discrete Rademacher initialization enables efficient learning of almost-full parities, while on the other hand, its Gaussian perturbation with a large enough constant standard deviation $\sigma$ prevents it. The positive result for almost-full parities is shown to hold up to $\sigma=O(d^{-1})$, pointing to questions about a sharper threshold phenomenon. Unlike statistical query (SQ) learning, where a singleton function class like the full parity is trivially learnable, our negative result applies to a fixed function and relies on an initial gradient alignment measure of potentially broader relevance to neural network learning.
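For concreteness, the objects referenced above can be read with the following standard conventions (a sketch of the assumed setup, not the paper's formal definitions): for $x \in \{-1,+1\}^d$ drawn uniformly and $S \subseteq [d]$ with $|S| = k$, the degree-$k$ parity is
$$\chi_S(x) \;=\; \prod_{i \in S} x_i,$$
with $S = [d]$ giving the full parity. The Rademacher initialization draws each weight uniformly from $\{-1,+1\}$, and its Gaussian perturbation sets $w = r + \sigma g$ with $r \sim \mathrm{Unif}\{-1,+1\}^d$ and $g \sim \mathcal{N}(0, I_d)$, where $\sigma$ is the perturbation scale appearing in the statement.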