Recent advances in Symmetric Positive Definite (SPD) matrix learning show that Riemannian metrics are fundamental to effective SPD neural networks. Motivated by this, we revisit the geometry of Cholesky factors and uncover a simple product structure that enables convenient metric design. Building on this insight, we propose two fast and stable SPD metrics derived via the Cholesky decomposition: the Power--Cholesky Metric (PCM) and the Bures--Wasserstein--Cholesky Metric (BWCM). Compared with existing SPD metrics, the proposed metrics offer closed-form operators, computational efficiency, and improved numerical stability. We further apply them to construct Riemannian Multinomial Logistic Regression (MLR) classifiers and residual blocks for SPD neural networks. Experiments on SPD deep learning, numerical stability analyses, and tensor interpolation demonstrate the effectiveness, efficiency, and robustness of our metrics. The code is available at https://github.com/GitZH-Chen/PCM_BWCM.