Adversarial training can be used to learn models that are robust against perturbations. For linear models, it can be formulated as a convex optimization problem. Leveraging this optimization structure allows significantly faster convergence rates than methods proposed in the context of deep learning. Still, the use of generic convex solvers can be inefficient for large-scale problems. Here, we propose tailored optimization algorithms for the adversarial training of linear models, which render large-scale regression and classification problems more tractable. For regression problems, we propose a family of solvers based on iterative ridge regression; for classification, a family of solvers based on projected gradient descent. The methods are based on extended-variable reformulations of the original problem. We illustrate their efficiency in numerical examples.
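To make the convexity claim concrete, the sketch below illustrates adversarial training of a linear regression model under per-sample l_inf-bounded input perturbations, where the worst-case squared loss admits the closed form (|y_i - x_i^T w| + delta * ||w||_1)^2 and the resulting objective is convex in w. This is a minimal illustration solved by plain subgradient descent, not the paper's tailored iterative-ridge or projected-gradient solvers; function names, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

def adversarial_loss(w, X, y, delta):
    """Average worst-case squared loss under l_inf perturbations of
    radius delta: mean((|y_i - x_i^T w| + delta * ||w||_1)^2)."""
    r = np.abs(y - X @ w) + delta * np.sum(np.abs(w))
    return np.mean(r ** 2)

def adversarial_fit(X, y, delta, lr=1e-2, n_iter=2000):
    """Minimize the convex adversarial objective by subgradient descent
    (illustrative baseline, not the tailored solvers of the paper)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        resid = y - X @ w
        amp = np.abs(resid) + delta * np.sum(np.abs(w))
        # Subgradient of mean((|resid_i| + delta*||w||_1)^2) w.r.t. w
        g = (2.0 / n) * (-(X.T @ (amp * np.sign(resid)))
                         + delta * np.sum(amp) * np.sign(w))
        w -= lr * g
    return w

# Small synthetic example
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)
w_adv = adversarial_fit(X, y, delta=0.1)
```

Because the pointwise maximum over perturbations is available in closed form, no inner maximization loop is needed, which is what distinguishes the linear setting from adversarial training of deep networks.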