In this work, we investigate Riemannian-geometry-based dimensionality reduction methods that respect the underlying manifold structure of the data. In particular, we focus on Principal Geodesic Analysis (PGA) as a nonlinear generalization of PCA for manifold-valued data, and we extend discriminant analysis through Riemannian adaptations of other established dimensionality reduction methods. These approaches exploit geodesic distances, tangent-space representations, and intrinsic statistical measures to achieve more faithful low-dimensional embeddings. We also discuss related manifold learning techniques and highlight their theoretical foundations and practical advantages. Experimental results on representative datasets demonstrate that Riemannian methods provide improved representation quality and classification performance compared to their Euclidean counterparts, especially for data constrained to curved spaces such as hyperspheres and manifolds of symmetric positive definite matrices. This study underscores the importance of geometry-aware dimensionality reduction in modern machine learning and data science applications.
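To make the core pipeline concrete, the following is a minimal sketch of tangent PCA, the standard linearized approximation to PGA, on the unit hypersphere: compute the Fréchet mean by iterating exponential and logarithm maps, lift the data into the tangent space at that mean via the log map, and run ordinary PCA there. The function names and the fixed-point mean iteration are illustrative choices, not a reference implementation from this work.

```python
import numpy as np

def log_map(p, q):
    """Sphere log map: tangent vector at p whose geodesic reaches q."""
    cos_t = np.clip(q @ p, -1.0, 1.0)
    u = q - cos_t * p                      # component of q orthogonal to p
    nu = np.linalg.norm(u)
    if nu < 1e-12:                         # q coincides with p
        return np.zeros_like(p)
    return np.arccos(cos_t) * u / nu       # direction scaled by geodesic distance

def exp_map(p, v):
    """Sphere exp map: point reached from p along geodesic with velocity v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(X, iters=50):
    """Intrinsic mean via fixed-point iteration p <- exp_p(avg log_p(x_i))."""
    p = X[0] / np.linalg.norm(X[0])
    for _ in range(iters):
        g = np.mean([log_map(p, x) for x in X], axis=0)
        p = exp_map(p, g)
    return p

def tangent_pca(X, k):
    """PGA approximation: PCA in the tangent space at the Fréchet mean."""
    mu = frechet_mean(X)
    V = np.array([log_map(mu, x) for x in X])   # tangent-space coordinates
    C = V.T @ V / len(X)                        # tangent covariance
    w, U = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1][:k]               # top-k eigendirections
    return mu, U[:, idx], V @ U[:, idx]         # mean, directions, k-dim scores
```

A discriminant-analysis variant would replace the eigendecomposition of the tangent covariance with a between-class/within-class generalized eigenproblem computed on the same tangent-space coordinates; the geometric lifting step is unchanged.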