We explain how to use the Kolmogorov Superposition Theorem (KST) to break the curse of dimensionality when approximating a dense class of multivariate continuous functions. We first show that there is a class of functions in $C([0,1]^d)$, called Kolmogorov-Lipschitz (KL) continuous, which can be approximated by a special ReLU neural network with two hidden layers at a dimension-independent approximation rate $O(1/n)$, with an approximation constant that grows quadratically in $d$. The number of parameters used in such a neural network approximation equals $(6d+2)n$. Next, we introduce KB-splines by using linear B-splines to replace the outer function, and smooth the KB-splines to obtain the so-called LKB-splines as the basis for approximation. Our numerical evidence shows that the curse of dimensionality is broken in the following sense: when using the standard discrete least squares (DLS) method to approximate a continuous function, there exists a pivotal set of points in $[0,1]^d$ of size at most $O(nd)$ such that the root mean squared error (RMSE) of the DLS based on the pivotal set is similar to the RMSE of the DLS based on the original set of size $O(n^d)$. The pivotal point set is chosen by a matrix cross approximation technique, and the number of LKB-splines used for approximation equals the size of the pivotal data set. Therefore, we need neither many basis functions nor many function values to approximate a high dimensional continuous function $f$.
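The pivotal-point idea above can be illustrated with a minimal numerical sketch. This is not the paper's method: a 2D polynomial basis stands in for the LKB-splines, and a simple greedy row-pivoting routine stands in for the matrix cross approximation; all names and parameters below are illustrative assumptions. The point is only that a DLS fit on a small, well-chosen subset of points can match the RMSE of a DLS fit on the full grid.

```python
import numpy as np

def pivotal_rows(A, k):
    # Greedy stand-in for matrix cross approximation: repeatedly pick the
    # row with the largest residual norm after projecting out the span of
    # the rows already chosen (akin to column-pivoted QR on A^T).
    R = A.astype(float).copy()
    idx = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=1)
        p = int(np.argmax(norms))
        idx.append(p)
        v = R[p] / norms[p]
        R -= np.outer(R @ v, v)   # deflate: chosen row's residual becomes ~0
    return np.array(idx)

def f(x, y):                      # a smooth test function on [0,1]^2
    return np.sin(np.pi * x) * np.exp(-y)

# Dense grid of size O(n^d): here a 40 x 40 grid, 1600 points.
g = np.linspace(0.0, 1.0, 40)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])

# Basis matrix A: tensor monomials up to degree 4 (25 basis functions),
# an illustrative substitute for the LKB-spline basis.
deg = 4
A = np.column_stack([pts[:, 0]**i * pts[:, 1]**j
                     for i in range(deg + 1) for j in range(deg + 1)])
b = f(pts[:, 0], pts[:, 1])

# Full DLS on all 1600 points.
c_full, *_ = np.linalg.lstsq(A, b, rcond=None)
rmse_full = np.sqrt(np.mean((A @ c_full - b)**2))

# DLS on a pivotal set of only 25 points (as many as basis functions).
sel = pivotal_rows(A, A.shape[1])
c_piv, *_ = np.linalg.lstsq(A[sel], b[sel], rcond=None)
rmse_piv = np.sqrt(np.mean((A @ c_piv - b)**2))

print(rmse_full, rmse_piv)
```

In this toy setting the two RMSEs are of the same order even though the pivotal fit uses 25 of the 1600 function values, which mirrors the abstract's claim that an $O(nd)$-sized pivotal set can replace the $O(n^d)$-sized grid.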