We derive universal approximation results for the class of (countably) $m$-rectifiable measures. Specifically, we prove that $m$-rectifiable measures can be approximated, with arbitrarily small error in Wasserstein distance, as push-forwards of the one-dimensional Lebesgue measure on $[0,1]$ under ReLU neural networks. Moreover, the weights in the networks under consideration are quantized and bounded, and the number of ReLU neural networks required to achieve an approximation error of $\varepsilon$ is no larger than $2^{b(\varepsilon)}$ with $b(\varepsilon)=\mathcal{O}(\varepsilon^{-m}\log^2(\varepsilon))$. This result improves on Lemma IX.4 in Perekrestenko et al. by showing that the rate at which $b(\varepsilon)$ tends to infinity as $\varepsilon$ tends to zero equals the rectifiability parameter $m$, which can be much smaller than the ambient dimension. We extend this result to countably $m$-rectifiable measures and show that the rate still equals the rectifiability parameter $m$ provided that, among other technical assumptions, the measure decays exponentially on the individual components of the countably $m$-rectifiable support set.
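To make the central objects concrete, the following is a minimal numerical sketch (in Python with NumPy; an illustration, not the paper's construction) for the simplest case $m=1$: the uniform measure on the unit circle in $\mathbb{R}^2$ is $1$-rectifiable and arises as the push-forward $f_\#\lambda_{[0,1]}$ of the Lebesgue measure on $[0,1]$ under $f(t)=(\cos 2\pi t,\sin 2\pi t)$. A one-hidden-layer ReLU network $g$ that piecewise-linearly interpolates $f$, with quantized and bounded weights, satisfies $W_1(f_\#\lambda, g_\#\lambda)\le \sup_t\|f(t)-g(t)\|$, because $(f(T),g(T))$ with $T\sim\lambda_{[0,1]}$ is a coupling of the two push-forward measures. The names `f`, `g`, and the network size and bit depth below are illustrative choices, not the paper's notation.

```python
import numpy as np

def f(t):
    """Parametrization of the unit circle: a 1-rectifiable target in R^2."""
    return np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=-1)

def relu(x):
    return np.maximum(x, 0.0)

def quantize(w, bits):
    """Round weights to a uniform grid of spacing 2**-bits (quantized, bounded weights)."""
    step = 2.0 ** (-bits)
    return np.round(w / step) * step

def relu_net(t, knots, values, bits):
    """Piecewise-linear interpolant of (knots, values), written explicitly as a
    one-hidden-layer ReLU network  g(t) = v_0 + sum_k c_k * relu(t - knots[k]),
    with the output weights c_k quantized to `bits` bits."""
    slopes = np.diff(values, axis=0) / np.diff(knots)[:, None]
    # c_0 = slope_0, c_k = slope_k - slope_{k-1}: slope changes at the knots
    coeffs = np.diff(slopes, axis=0, prepend=np.zeros((1, values.shape[1])))
    coeffs = quantize(coeffs, bits)
    hidden = relu(t[:, None] - knots[None, :-1])          # hidden-layer activations
    return values[0] + hidden @ coeffs

N, bits = 32, 12                                          # network width / weight resolution
knots = np.linspace(0.0, 1.0, N + 1)
g = lambda t: relu_net(t, knots, f(knots), bits)

ts = np.linspace(0.0, 1.0, 100_000)                       # fine grid on [0,1]
w1_bound = np.max(np.linalg.norm(f(ts) - g(ts), axis=1))  # sup_t |f - g| >= W_1
print(f"Wasserstein-1 error bound with {N} neurons, {bits}-bit weights: {w1_bound:.4f}")
```

For this smooth curve the interpolation error, and hence the bound printed above, decays like $N^{-2}$ in the number of knots; the count $2^{b(\varepsilon)}$ in the abstract instead tracks how many distinct quantized networks are needed to cover the whole class of $m$-rectifiable targets at accuracy $\varepsilon$.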