We present a new paradigm for creating random features to approximate bivariate functions (in particular, kernels) defined on general manifolds. This new mechanism of Manifold Random Features (MRFs) leverages a discretization of the manifold and the recently introduced technique of Graph Random Features (GRFs) to learn continuous fields on manifolds. These fields are used to construct continuous approximation mechanisms that, in general scenarios, cannot otherwise be derived analytically. MRFs provide positive and bounded features, a key property for accurate, low-variance approximation. We show a deep asymptotic connection between GRFs, defined on discrete graph objects, and continuous random features used for regular kernels. As a by-product of our method, we rediscover a recently introduced mechanism for Gaussian kernel approximation, applied in particular to improve linear-attention Transformers, by considering simple random walks on graphs and bypassing the original, complex mathematical derivations. We complement our algorithm with a rigorous theoretical analysis and verify it in thorough experimental studies.
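The positive random features for Gaussian kernel approximation mentioned above (the mechanism used in linear-attention Transformers) can be sketched as follows; this is a minimal illustration of the standard positive-random-feature estimator, not the MRF construction itself. The function name `gaussian_prf` and the chosen dimensions are illustrative assumptions. The features are strictly positive and satisfy E[φ(x)·φ(y)] = exp(-‖x−y‖²/2) for Gaussian rows of W:

```python
import numpy as np

def gaussian_prf(x, w):
    """Positive random features for the Gaussian kernel.

    phi(x) = exp(-||x||^2) * exp(W x) / sqrt(m), with rows of W drawn
    i.i.d. from N(0, I_d). Then E[phi(x) . phi(y)] = exp(-||x - y||^2 / 2),
    and every feature is positive and (for bounded inputs) bounded.
    """
    m = w.shape[0]
    return np.exp(-np.dot(x, x)) * np.exp(w @ x) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 3, 50_000                      # input dim, number of random features
w = rng.standard_normal((m, d))       # shared random projection matrix

x = np.array([0.1, 0.2, 0.3])
y = np.array([0.2, 0.1, 0.0])

approx = gaussian_prf(x, w) @ gaussian_prf(y, w)   # unbiased kernel estimate
exact = np.exp(-np.sum((x - y) ** 2) / 2)          # true Gaussian kernel value
```

Positivity of the features is what makes the estimator low-variance in the small-kernel-value regime, in contrast to trigonometric random features, whose signed terms can cancel.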