The article concerns the low-rank approximation of matrices generated by sampling a smooth function of two $m$-dimensional variables. We refute an argument made in the literature that, for a specific class of analytic functions, such matrices admit accurate entrywise approximation of rank that is independent of $m$. We provide a theoretical explanation of the numerical results presented in support of this argument by describing three narrower classes of functions for which $n \times n$ function-generated matrices can be approximated within an entrywise error of order $\varepsilon$ with rank $\mathcal{O}(\log(n) \varepsilon^{-2} \mathrm{polylog}(\varepsilon^{-1}))$ that is independent of the dimension $m$: (i) functions of the inner product of the two variables, (ii) functions of the squared Euclidean distance between the variables, and (iii) shift-invariant positive-definite kernels. We extend our argument to low-rank tensor-train approximation of tensors generated by functions of the multi-linear product of their $m$-dimensional variables. We discuss our results in the context of low-rank approximation of attention in transformer neural networks.
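To make the rank scaling concrete, the following is a minimal numerical sketch of case (iii) via random Fourier features (Rahimi & Recht, 2007), not necessarily the paper's own construction: by Bochner's theorem, a (normalized) shift-invariant positive-definite kernel is the expectation of $\cos(w^\top(x - y))$ over its spectral measure, so averaging $r$ sampled cosines gives an explicit rank-$2r$ factorization whose entrywise error is governed by $r$ alone, not by the dimension $m$. The Gaussian kernel $\exp(-\|x-y\|^2/2)$ (whose spectral measure is the standard normal) and all sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 2000  # matrix size and number of random frequencies (rank = 2r)

for m in (10, 100, 1000):  # ambient dimension: the error should not grow with m
    X = rng.standard_normal((n, m)) / np.sqrt(m)
    Y = rng.standard_normal((n, m)) / np.sqrt(m)

    # Exact kernel matrix A_ij = exp(-||x_i - y_j||^2 / 2).
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    A = np.exp(-sq / 2.0)

    # Bochner: exp(-||d||^2/2) = E_{w ~ N(0, I_m)} cos(w^T d), so averaging r
    # sampled cosines yields rank-2r factors U, V with A ≈ U @ V.T entrywise.
    W = rng.standard_normal((r, m))
    U = np.hstack([np.cos(X @ W.T), np.sin(X @ W.T)]) / np.sqrt(r)
    V = np.hstack([np.cos(Y @ W.T), np.sin(Y @ W.T)]) / np.sqrt(r)

    print(f"m = {m:5d}: max entrywise error = {np.abs(A - U @ V.T).max():.4f}")
```

By Hoeffding's inequality and a union bound over the $n^2$ entries, $r = \mathcal{O}(\varepsilon^{-2} \log(n))$ sampled frequencies keep the maximal entrywise error below $\varepsilon$ with high probability, which matches the $\mathcal{O}(\log(n) \varepsilon^{-2})$ rank scaling quoted above up to polylogarithmic factors.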