The solution to empirical risk minimization with $f$-divergence regularization (ERM-$f$DR) is extended to constrained optimization problems, and conditions are established under which the regularized solution coincides with the solution of the constrained problem. A dual formulation of ERM-$f$DR is introduced, yielding a computationally efficient method for deriving the normalization function of the ERM-$f$DR solution. This dual approach leverages the Legendre-Fenchel transform and the implicit function theorem, enabling two explicit characterizations of the generalization error: one for general algorithms under mild conditions, and one specific to ERM-$f$DR solutions.
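The normalization step can be illustrated with a minimal numerical sketch. It is not the paper's method: it assumes a discrete reference measure $Q$, takes the known KL special case (where the ERM-$f$DR solution is the Gibbs measure and the normalization has a closed form), and, for a general $f$, assumes the solution takes the form $\mathrm{d}P/\mathrm{d}Q = (f')^{-1}(-(L(\theta)+\beta)/\lambda)$ with the normalization constant $\beta$ found by root-finding, as the implicit function theorem suggests is well posed. The losses, weights, and $\lambda$ below are illustrative.

```python
import math

def kl_gibbs_solution(losses, weights, lam):
    """KL special case (f(x) = x log x): the ERM-fDR solution is the Gibbs
    measure, with log-normalization computed in closed form via log-sum-exp."""
    m = max(-l / lam for l in losses)  # shift for numerical stability
    z = sum(w * math.exp(-l / lam - m) for l, w in zip(losses, weights))
    log_norm = m + math.log(z)
    # Radon-Nikodym derivative dP/dQ evaluated at each model
    return [math.exp(-l / lam - log_norm) for l in losses]

def normalization_root(losses, weights, lam, f_prime_inv, lo, hi, tol=1e-12):
    """General f (sketch): find beta such that
    E_Q[(f')^{-1}(-(L + beta)/lam)] = 1, by bisection on [lo, hi]."""
    def g(beta):
        return sum(w * f_prime_inv(-(l + beta) / lam)
                   for l, w in zip(losses, weights)) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:  # root lies in the left half-interval
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For KL, $(f')^{-1}(y) = e^{y-1}$, and the root-finding route reproduces the closed-form Gibbs weights, which gives a simple consistency check on the general routine.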