Smooth activation functions are ubiquitous in modern deep learning, yet their theoretical advantages over non-smooth counterparts remain poorly understood. In this work, we study both approximation and statistical properties of neural networks with smooth activations for learning functions in the Sobolev space $W^{s,\infty}([0,1]^d)$ with $s>0$. We prove that constant-depth networks equipped with smooth activations achieve smoothness adaptivity: increasing width alone suffices to attain the minimax-optimal approximation and estimation error rates (up to logarithmic factors). In contrast, for non-smooth activations such as ReLU, smoothness adaptivity is fundamentally limited by depth: the attainable approximation order is bounded by depth, and higher-order smoothness requires proportional depth growth. These results identify activation smoothness as a fundamental mechanism, complementary to depth, for achieving optimal rates over Sobolev function classes. Technically, our analysis is based on a multi-scale approximation framework that yields explicit neural network approximators with controlled parameter norms and model size. This complexity control ensures statistical learnability under empirical risk minimization (ERM) and avoids the impractical $\ell^0$-sparsity constraints commonly required in prior analyses.
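For context, the minimax benchmarks alluded to above typically take the following form in this literature; this is an illustrative sketch under the standard nonparametric regression setting (with $N$ denoting the number of network parameters and $n$ the sample size, notation introduced here only for illustration), not a restatement of the paper's theorems:
\[
  \underbrace{\inf_{\theta:\,|\theta|\le N}\ \bigl\| f_\theta - f^\star \bigr\|_{L^\infty([0,1]^d)}}_{\text{approximation}}
  \;\lesssim\; N^{-s/d},
  \qquad
  \underbrace{\mathbb{E}\,\bigl\| \hat f_n^{\mathrm{ERM}} - f^\star \bigr\|_{L^2}^{2}}_{\text{estimation}}
  \;\asymp\; n^{-\frac{2s}{2s+d}} \quad \text{(up to logarithmic factors)},
\]
where $f^\star$ ranges over a bounded ball of $W^{s,\infty}([0,1]^d)$.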