We study the problem of learning robust classifiers when the classifier will receive a perturbed input. Unlike the robust PAC learning studied in prior work, here the clean data point and its label are also adversarially chosen. We formulate this setting as an online learning problem and consider both the realizable and agnostic learnability of hypothesis classes. We define a new dimension of hypothesis classes and show that it controls the mistake bounds in the realizable setting and the regret bounds in the agnostic setting. In contrast to the dimension that characterizes learnability in the PAC setting, our dimension is rather simple and resembles the Littlestone dimension. We generalize our dimension to multiclass hypothesis classes and prove similar results in the realizable case. Finally, we study the case where the learner does not know the set of allowed perturbations for each point and only has some prior on them.