Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.