Recently, tensor fibered rank has demonstrated impressive performance in low-rank tensor completion (LRTC) by effectively exploiting the global low-rank property along all directions. However, it still has some limitations. First, the typical tensor fibered rank approximation based on the tensor nuclear norm (TNN) uses a fixed, data-independent transformation, which may not be optimal for the underlying tensor structure. Second, it ignores the local piecewise smoothness of the data. To address these limitations, we present a nonconvex learnable transformed fibered nuclear norm (NLTFNN) model for LRTC, which uses a learnable transformed fibered nuclear norm with Log-Determinant (LTFNNLog) as the tensor fibered rank approximation and employs total variation (TV) regularization to exploit local piecewise smoothness. An efficient algorithm based on the alternating direction method of multipliers (ADMM) is developed to solve NLTFNN, and its convergence is proved theoretically. Experiments on various datasets demonstrate the superiority of NLTFNN over several existing methods.
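To give intuition for why a Log-Determinant surrogate is a tighter rank approximation than the nuclear norm, the sketch below contrasts the two on the singular values of a matrix. This is an illustrative assumption, not the paper's actual NLTFNN formulation: the function names `nuclear_norm` and `logdet_surrogate` and the smoothing parameter `eps` are hypothetical.

```python
import numpy as np

def nuclear_norm(X):
    # Convex rank surrogate: sum of singular values.
    return np.linalg.svd(X, compute_uv=False).sum()

def logdet_surrogate(X, eps=1e-2):
    # Nonconvex Log-Determinant surrogate: sum_i log(sigma_i + eps).
    # Large singular values are penalized far less than under the
    # nuclear norm, so the surrogate tracks the true rank more closely.
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(np.log(s + eps))

# A rank-2 matrix with one dominant singular value.
X = np.diag([10.0, 1.0, 0.0, 0.0])
print(nuclear_norm(X))      # dominated by the largest singular value
print(logdet_surrogate(X))  # weights all nonzero singular values more evenly
```

Under the nuclear norm, scaling the dominant singular value by 10 multiplies its penalty by 10; under the Log-Determinant surrogate the penalty grows only by log(10), which is the sense in which it better approximates the (scale-invariant) rank.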