In this work, we consider the problem of learning one-hidden-layer ReLU neural networks with inputs from $\mathbb{R}^d$. We show that this learning problem is hard under standard cryptographic assumptions even when: (1) the size of the neural network is polynomial in $d$, (2) its input distribution is a standard Gaussian, and (3) the noise is Gaussian and polynomially small in $d$. Our hardness result is based on the hardness of the Continuous Learning with Errors (CLWE) problem, and in particular relies on the widely believed worst-case hardness of approximately solving the shortest vector problem to within a multiplicative polynomial factor.
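To make the learning problem concrete, the following is a minimal sketch (not taken from the paper) of the data-generating process the abstract describes: labels produced by a one-hidden-layer ReLU network on standard-Gaussian inputs, corrupted by Gaussian noise that is polynomially small in $d$. The width, noise exponent, and parameter distributions below are illustrative choices, not ones specified by the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10           # input dimension
k = 2 * d        # hidden width, polynomial in d (illustrative choice)
sigma = d**-2.0  # Gaussian noise scale, polynomially small in d (illustrative)

# Unknown network parameters that the learner must recover.
W = rng.standard_normal((k, d))  # hidden-layer weight vectors w_1, ..., w_k
a = rng.standard_normal(k)       # output-layer weights a_1, ..., a_k

def sample(n):
    """Draw n labeled examples (x_i, y_i) from the learning problem."""
    X = rng.standard_normal((n, d))             # x_i ~ N(0, I_d)
    clean = np.maximum(X @ W.T, 0.0) @ a        # sum_j a_j * ReLU(<w_j, x_i>)
    y = clean + sigma * rng.standard_normal(n)  # add Gaussian label noise
    return X, y

X, y = sample(5)
```

The hardness result says that, even with access to polynomially many such samples, no polynomial-time algorithm can learn the network under the stated cryptographic assumption.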