In many learning tasks, certain requirements on the processing of individual data samples should arguably be formalized as hard constraints in the underlying optimization problem, rather than through arbitrary penalty terms. We show that, in these scenarios, learning can be carried out with a sequential penalty method that handles constraints properly. The proposed algorithm is shown to possess convergence guarantees under assumptions that are reasonable in deep learning scenarios. Moreover, experimental results on image processing tasks show that the method is viable in practice.
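To make the idea concrete, the following is a minimal, generic sketch of a sequential penalty method, not the paper's exact algorithm: a constrained problem min f(x) s.t. g(x) ≤ 0 is approximated by a sequence of unconstrained problems f(x) + ρ_k · max(0, g(x))², where the penalty weight ρ_k grows between rounds. The function names, step-size schedule, and toy problem below are all illustrative assumptions.

```python
import numpy as np

def sequential_penalty(f_grad, g, g_grad, x0, rho0=1.0, rho_growth=10.0,
                       outer_iters=5, inner_iters=200, lr=0.05):
    """Sketch of a sequential (quadratic) penalty method.

    Minimizes f(x) subject to g(x) <= 0 by running gradient descent on
    f(x) + rho * max(0, g(x))**2 and increasing rho after each round.
    """
    x = np.asarray(x0, dtype=float)
    rho = rho0
    for _ in range(outer_iters):
        step = lr / (1.0 + rho)  # shrink the step as the penalty stiffens
        for _ in range(inner_iters):
            viol = max(0.0, g(x))  # constraint violation, zero if feasible
            # Gradient of f(x) + rho * max(0, g(x))**2
            x = x - step * (f_grad(x) + 2.0 * rho * viol * g_grad(x))
        rho *= rho_growth  # tighten the constraint on the next round
    return x

# Toy problem: minimize ||x - c||^2 subject to x[0] + x[1] - 1 <= 0.
# The unconstrained minimizer c = (2, 2) is infeasible; the constrained
# solution is the projection (0.5, 0.5).
c = np.array([2.0, 2.0])
x_star = sequential_penalty(
    f_grad=lambda x: 2.0 * (x - c),
    g=lambda x: x[0] + x[1] - 1.0,
    g_grad=lambda x: np.array([1.0, 1.0]),
    x0=np.zeros(2),
)
```

As the penalty weight grows, the iterate is driven arbitrarily close to the feasible set, which is the sense in which a sequential penalty scheme can stand in for a hard constraint.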