Universal approximation theorems show that neural networks can approximate any continuous function; however, the number of parameters may grow exponentially with the ambient dimension, so these results do not fully explain the practical success of deep models on high-dimensional data. Barron space theory addresses this gap: if a target function belongs to a Barron space, a two-layer network with $n$ parameters achieves an $O(n^{-1/2})$ approximation error in $L^2$. Yet classical Barron spaces $\mathscr{B}^{s+1}$ still demand stronger regularity than the Sobolev spaces $H^s$, and existing depth-sensitive results often impose constraints such as $sL \le 1/2$. In this paper, we introduce a log-weighted Barron space $\mathscr{B}^{\log}$, whose membership requirement is strictly weaker than that of $\mathscr{B}^s$ for every $s>0$. For this new function space, we first study embedding properties and carry out a statistical analysis via Rademacher complexity. We then prove that functions in $\mathscr{B}^{\log}$ can be approximated by deep ReLU networks with explicit depth dependence. Next, we define a family of spaces $\mathscr{B}^{s,\log}$, establish approximation bounds in the $H^1$ norm, and identify the maximal depth scales under which these rates are preserved. Our results clarify how depth reduces the regularity required for efficient representation, offering a more precise explanation for the performance of deep architectures beyond the classical Barron setting and for their stable use in today's high-dimensional problems.
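For orientation, a minimal sketch of the classical setting may help (the precise definitions of $\mathscr{B}^{\log}$ and $\mathscr{B}^{s,\log}$ are those given in the body of the paper; the log weight written below is one plausible reading of the notation and is stated here purely as an assumption). The spectral Barron norm of order $s$ and the classical two-layer rate take the form
\[
\|f\|_{\mathscr{B}^{s}} \;=\; \int_{\mathbb{R}^{d}} (1+|\xi|)^{s}\,\lvert \hat f(\xi)\rvert \, d\xi,
\qquad
\inf_{f_n \in \mathcal{F}_n} \|f - f_n\|_{L^2} \;\le\; C\,\|f\|_{\mathscr{B}^{1}}\, n^{-1/2},
\]
where $\mathcal{F}_n$ denotes two-layer networks with $n$ neurons. A log-weighted analogue would replace the polynomial weight by, e.g., $1+\log(1+|\xi|)$; since this weight is dominated by $(1+|\xi|)^{s}$ for every $s>0$, the resulting space strictly contains each $\mathscr{B}^{s}$, consistent with the weakened assumption described above.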