We study algorithms for the Schatten-$p$ Low Rank Approximation (LRA) problem. First, we show that by using fast rectangular matrix multiplication algorithms together with different block sizes, we can improve the running time of the algorithms in the recent work of Bakshi, Clarkson, and Woodruff (STOC 2022). We then show that by carefully combining our new algorithm with the algorithm of Li and Woodruff (ICML 2020), we obtain even faster algorithms for Schatten-$p$ LRA. While the block-based algorithms are fast in the real number model, they lack a stability analysis showing that they work when implemented on a machine with polylogarithmic bits of precision. We show that the LazySVD algorithm of Allen-Zhu and Li (NeurIPS 2016) can be implemented on a floating point machine with a number of bits of precision that is only logarithmic in the input parameters. As far as we are aware, this is the first stability analysis of any algorithm that uses $O((k/\sqrt{\varepsilon})\operatorname{poly}(\log n))$ matrix-vector products with the matrix $A$ to output a $(1+\varepsilon)$-approximate solution to the rank-$k$ Schatten-$p$ LRA problem.
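To make the matrix-vector product model concrete, the following numpy sketch builds a rank-$k$ approximation using only products with $A$ and $A^\top$, via a block Krylov subspace in the spirit of standard block Krylov LRA methods, and measures the residual in a Schatten-$p$ norm. This is an illustrative sketch, not the block-based algorithm or the LazySVD analysis from the abstract; the function names and parameters (`q` power iterations, seed handling) are our own choices.

```python
import numpy as np

def matvec_lowrank(A, k, q=8, seed=None):
    """Illustrative rank-k approximation from matrix-vector products only.

    Builds a block Krylov space K = [A G, (A A^T) A G, ..., (A A^T)^q A G]
    from k random starting vectors, then returns the best rank-k
    approximation of A whose column space lies in span(K).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    G = rng.standard_normal((d, k))
    X = A @ G                      # k matrix-vector products with A
    blocks = [X]
    for _ in range(q):
        X = A @ (A.T @ X)          # 2k more matvecs per Krylov block
        blocks.append(X)
    Q, _ = np.linalg.qr(np.hstack(blocks))   # orthonormal basis of the Krylov space
    # Best rank-k approximation of A within span(Q), via a small SVD.
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]

def schatten_p_error(A, B, p):
    """Schatten-p norm of the residual A - B."""
    s = np.linalg.svd(A - B, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))
```

On a matrix with a decaying spectrum, the Krylov-space approximation is close to the optimal truncated-SVD error; the number of matvecs grows with the block size $k$ and the iteration count `q`, which is the resource the abstract's $O((k/\sqrt{\varepsilon})\operatorname{poly}(\log n))$ bound counts.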