The singular value decomposition (SVD) factors a matrix into a product of three matrices: a matrix of left singular vectors, a diagonal matrix of non-negative singular values, and a matrix of right singular vectors. There are two main approaches to computing the SVD: the classical method and the randomized method. The classical approach yields accurate singular values. The randomized approach is best suited to high-dimensional matrices; it targets a controlled approximation accuracy without necessarily computing all singular values. In this paper, the SVD computation is formalized as an optimization problem solved with a gradient search algorithm. This results in a power method that recovers either all singular values or the first (largest) ones, together with their associated right singular vectors. In this iterative search, the accuracy of the singular values and of the associated vector matrix depends on user-chosen settings. Two applications of the SVD are principal component analysis and the autoencoder used in neural network models.
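The power-method idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's gradient formulation: it assumes plain power iteration on AᵀA, and `top_singular_pair` is a hypothetical helper name.

```python
import numpy as np

def top_singular_pair(A, iters=500, tol=1e-10):
    """Power iteration on A^T A: returns the largest singular value
    and its associated right singular vector (illustrative sketch)."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.T @ (A @ v)              # one power-iteration step on A^T A
        w_norm = np.linalg.norm(w)
        if w_norm == 0.0:              # A is the zero map on v
            break
        v_new = w / w_norm
        if np.linalg.norm(v_new - v) < tol:   # user-set accuracy threshold
            v = v_new
            break
        v = v_new
    sigma = np.linalg.norm(A @ v)      # singular value sigma = ||A v||
    return sigma, v

# Example: for diag(3, 1) the largest singular value is 3,
# with right singular vector along the first axis.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
sigma, v = top_singular_pair(A)
```

Subsequent singular values can be obtained the same way after deflating the matrix, i.e. iterating on A minus the rank-one component already found; the stopping tolerance `tol` plays the role of the user settings mentioned above.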