We study denoising of a third-order tensor when the ground-truth tensor is not necessarily Tucker low-rank. Specifically, we observe $$ Y=X^\ast+Z\in \mathbb{R}^{p_{1} \times p_{2} \times p_{3}}, $$ where $X^\ast$ is the ground-truth tensor and $Z$ is the noise tensor. We propose a simple variant of the higher-order tensor SVD estimator $\widetilde{X}$. We show that, uniformly over all user-specified Tucker ranks $(r_{1},r_{2},r_{3})$, $$ \| \widetilde{X} - X^\ast \|_{ \mathrm{F}}^2 = O \Big( \kappa^2 \Big\{ r_{1}r_{2}r_{3}+\sum_{k=1}^{3} p_{k} r_{k} \Big\} \; + \; \xi_{(r_{1},r_{2},r_{3})}^2\Big) \quad \text{ with high probability.} $$ Here, the bias term $\xi_{(r_1,r_2,r_3)}$ is the best achievable approximation error of $X^\ast$ over the class of tensors with Tucker ranks $(r_1,r_2,r_3)$; $\kappa^2$ quantifies the noise level; and the variance term $\kappa^2 \{r_{1}r_{2}r_{3}+\sum_{k=1}^{3} p_{k} r_{k}\}$ scales with the effective number of free parameters in the estimator $\widetilde{X}$. Our analysis achieves a clean rank-adaptive bias-variance tradeoff: as we increase the ranks of the estimator $\widetilde{X}$, the bias $\xi_{(r_{1},r_{2},r_{3})}$ decreases and the variance increases. As a byproduct, we also obtain a convenient bias-variance decomposition for vanilla low-rank SVD matrix estimators.
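As a rough illustration of the kind of estimator studied here (a minimal sketch of standard truncated higher-order SVD, not the paper's exact variant), one can project the observation $Y$ onto the top-$r_k$ left singular subspace of each mode-$k$ unfolding. The tensor dimensions, ranks, and function names below are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization: move the given mode to the front and flatten."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: reshape back and restore the original axis order."""
    rest = tuple(np.delete(shape, mode))
    return np.moveaxis(M.reshape((shape[mode],) + rest), 0, mode)

def hosvd_truncate(Y, ranks):
    """Truncated HOSVD denoiser (sketch): for each mode k, project onto the
    span of the top-r_k left singular vectors of the mode-k unfolding of Y."""
    X = Y.copy()
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(Y, k), full_matrices=False)
        Uk = U[:, :r]                       # top-r_k mode-k singular vectors
        Xk = unfold(X, k)
        X = fold(Uk @ (Uk.T @ Xk), k, X.shape)  # X <- X x_k (Uk Uk^T)
    return X

# Tiny demo: a Tucker rank-(1,1,1) signal plus Gaussian noise.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
X_star = 10.0 * np.einsum("i,j,k->ijk", a, b, c)
Y = X_star + 0.1 * rng.standard_normal(X_star.shape)
X_hat = hosvd_truncate(Y, (1, 1, 1))
```

Raising the user-specified ranks `(r1, r2, r3)` enlarges the projection subspaces, which shrinks the approximation bias but retains more of the noise, mirroring the bias-variance tradeoff in the bound above.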