Eigenmaps are important in analysis, geometry, and machine learning, especially in nonlinear dimension reduction. Approximating the eigenmaps of a Laplace operator depends crucially on the scaling parameter $\epsilon$: if $\epsilon$ is too small or too large, the approximation is inaccurate or breaks down completely. However, an analytic expression for the optimal $\epsilon$ is out of reach. In our work, we use explicitly solvable models and Monte Carlo simulations to find an approximately optimal range of $\epsilon$ that gives, on average, relatively accurate approximations of the eigenmaps. Numerically, we consider several model situations in which the eigen-coordinates can be computed analytically, including intervals with uniform and weighted measures, squares, tori, spheres, and the Sierpinski gasket. In broader terms, we intend to study eigen-coordinates on weighted Riemannian manifolds, possibly with boundary, and on some metric measure spaces, such as fractals.
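The dependence on $\epsilon$ can be illustrated on the simplest solvable model mentioned above, the unit interval with uniform measure, where the first nontrivial Neumann eigenfunction is $\cos(\pi x)$. The sketch below (an illustration under our own assumptions, not the paper's exact experimental setup: Gaussian kernel of scale $\epsilon$, symmetric normalized graph Laplacian, and a hypothetical helper `eigenmap_error`) measures how far the second graph-Laplacian eigenvector is from the analytic eigenfunction as $\epsilon$ varies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Monte Carlo sample from the uniform measure on [0, 1]
x = np.sort(rng.uniform(0.0, 1.0, n))

def eigenmap_error(eps):
    """Distance between the first nontrivial graph eigenvector and cos(pi*x)."""
    # Gaussian kernel with scaling parameter eps
    W = np.exp(-((x[:, None] - x[None, :]) ** 2) / eps)
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian I - D^{-1/2} W D^{-1/2}
    Ls = np.eye(n) - W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(Ls)
    # Map back to a random-walk eigenvector and normalize
    phi = vecs[:, 1] / np.sqrt(d)
    phi /= np.linalg.norm(phi)
    target = np.cos(np.pi * x)
    target /= np.linalg.norm(target)
    # Eigenvectors are defined up to sign
    return min(np.linalg.norm(phi - target), np.linalg.norm(phi + target))

for eps in [1e-6, 1e-4, 1e-2, 1e-1]:
    print(f"eps = {eps:.0e}, error = {eigenmap_error(eps):.3f}")
```

For very small $\epsilon$ the kernel graph effectively disconnects and the eigenvector degenerates, while for large $\epsilon$ the discrete operator oversmooths; the error is small only in an intermediate range, which is the regime the Monte Carlo study above aims to quantify.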