Robust matrix completion (RMC) is a widely used machine learning tool that simultaneously tackles two critical issues in low-rank data analysis: missing data entries and extreme outliers. This paper proposes a novel, scalable, and learnable non-convex approach, coined Learned Robust Matrix Completion (LRMC), for large-scale RMC problems. LRMC enjoys low computational complexity and linear convergence. Motivated by the proposed theorem, the free parameters of LRMC can be effectively learned via deep unfolding to achieve optimal performance. Furthermore, this paper proposes a flexible feedforward-recurrent-mixed neural network framework that extends deep unfolding from a fixed number of iterations to an infinite number of iterations. The superior empirical performance of LRMC is verified through extensive experiments against state-of-the-art methods on synthetic datasets and real applications, including video background subtraction, ultrasound imaging, face modeling, and cloud removal from satellite imagery.