In this paper, we characterize the learnability of forgiving 0-1 loss functions in the multiclass setting with an effectively finite output and label space. To do so, we introduce a new combinatorial dimension based on the Natarajan dimension and show that a hypothesis class is learnable in our setting if and only if this Generalized Natarajan Dimension is finite. We also show how this dimension characterizes other known learning settings, including a broad family of instantiations of learning with set-valued feedback and a modified version of list learning.