Kernel ridge regression, KRR, is a generalization of linear ridge regression that is non-linear in the data but linear in the parameters. Here, we introduce an equivalent formulation of the objective function of KRR, which opens the door both to using penalties other than the ridge penalty and to studying kernel ridge regression from the perspective of gradient descent. Using a continuous-time perspective, we derive a closed-form solution for solving kernel regression with gradient descent, something we refer to as kernel gradient flow, KGF, and theoretically bound the differences between KRR and KGF, where, for the latter, regularization is obtained through early stopping. We also generalize KRR by replacing the ridge penalty with the $\ell_1$ and $\ell_\infty$ penalties, respectively, and use the fact that, analogously to the similarities between KGF and KRR, $\ell_1$ regularization and forward stagewise regression (also known as coordinate descent), and $\ell_\infty$ regularization and sign gradient descent, follow similar solution paths. We can thus alleviate the need for computationally heavy algorithms based on proximal gradient descent. We show, theoretically and empirically, how the $\ell_1$ and $\ell_\infty$ penalties, and the corresponding gradient-based optimization algorithms, produce sparse and robust kernel regression solutions, respectively.
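The correspondence between ridge regularization and early-stopped gradient descent can be illustrated numerically. The following is a minimal sketch, not the paper's implementation: it fits kernel ridge regression on toy data via its closed form, $\hat{y} = K(K + \lambda I)^{-1} y$, and compares it to the continuous-time gradient flow fit $\hat{y}(t) = (I - e^{-tK}) y$ with stopping time $t = 1/\lambda$, computed via the eigendecomposition of the symmetric kernel matrix. The data, RBF bandwidth, and choice of $\lambda$ are illustrative assumptions.

```python
import numpy as np

# Toy 1-D regression data (assumed for illustration only).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(40)

def rbf_kernel(a, b, bandwidth=0.2):
    """Gaussian (RBF) kernel matrix between two 1-D input vectors."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))

K = rbf_kernel(X, X)
lam = 0.1

# Kernel ridge regression fit on the training inputs:
# y_hat = K (K + lam I)^{-1} y.
y_krr = K @ np.linalg.solve(K + lam * np.eye(len(X)), y)

# Kernel gradient flow: gradient descent in function space, run in
# continuous time and stopped early at t = 1/lam, which plays the role
# of the ridge penalty. Its closed form, y_hat(t) = (I - exp(-t K)) y,
# is evaluated through the eigendecomposition of the PSD matrix K.
eigvals, V = np.linalg.eigh(K)
t = 1.0 / lam
y_kgf = V @ ((1.0 - np.exp(-t * eigvals)) * (V.T @ y))

# The two regularized fits stay close, as the KGF/KRR bounds suggest.
print(np.max(np.abs(y_krr - y_kgf)))
```

Along each eigendirection of $K$ with eigenvalue $s$, KRR shrinks the observations by $s/(s+\lambda)$ while KGF shrinks them by $1 - e^{-ts}$; with $t = 1/\lambda$ these factors agree at both ends of the spectrum, which is why the two solution paths track each other.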