Recent research has introduced the key notion of $H$-consistency bounds for surrogate losses. These bounds offer finite-sample guarantees, quantifying the relationship between the estimation error of a target loss, such as the zero-one loss, and the surrogate loss estimation error for a specific hypothesis set. However, previous bounds were derived under the condition that a lower bound on the surrogate loss conditional regret is given as a convex function of the target conditional regret, with no non-constant factors depending on the predictor or the input instance. Can we derive finer and more favorable $H$-consistency bounds? In this work, we relax this condition and present a general framework for establishing enhanced $H$-consistency bounds based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of more favorable bounds in various scenarios, including standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking.
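For context, the condition being relaxed can be sketched in the standard notation of the $H$-consistency literature; the symbols $\ell_{\mathrm{tgt}}$, $\ell_{\mathrm{sur}}$, $\Psi$, $\mathcal{C}_{\ell}$, and $\mathcal{M}_{\ell}$ below are illustrative choices, not notation taken from this abstract. Writing $\Delta\mathcal{C}_{\ell,\mathcal{H}}(h,x)$ for the conditional regret of $h$ at $x$ with respect to the hypothesis set $\mathcal{H}$, and $\mathcal{M}_{\ell}(\mathcal{H})$ for the minimizability gap of $\ell$, prior work assumes a convex, non-decreasing function $\Psi$ with $\Psi(0) = 0$ such that, for all $h \in \mathcal{H}$ and all inputs $x$,
\[
  \Psi\bigl(\Delta\mathcal{C}_{\ell_{\mathrm{tgt}},\mathcal{H}}(h,x)\bigr)
  \;\le\;
  \Delta\mathcal{C}_{\ell_{\mathrm{sur}},\mathcal{H}}(h,x),
\]
which, after taking expectations over $x$ and applying Jensen's inequality to the convex $\Psi$, yields an $H$-consistency bound of the form
\[
  \Psi\bigl(\mathcal{E}_{\ell_{\mathrm{tgt}}}(h)
    - \mathcal{E}^{*}_{\ell_{\mathrm{tgt}}}(\mathcal{H})
    + \mathcal{M}_{\ell_{\mathrm{tgt}}}(\mathcal{H})\bigr)
  \;\le\;
  \mathcal{E}_{\ell_{\mathrm{sur}}}(h)
    - \mathcal{E}^{*}_{\ell_{\mathrm{sur}}}(\mathcal{H})
    + \mathcal{M}_{\ell_{\mathrm{sur}}}(\mathcal{H}).
\]
The key restriction is that $\Psi$ is a fixed function, independent of the predictor $h$ and the instance $x$; the framework described above replaces this pointwise condition with more general inequalities between the two conditional regrets, which may involve such non-constant factors.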