We study a sequential prediction problem in which an adversary may inject arbitrarily many adversarial instances into a stream of i.i.d.\ instances, but at each round the learner may also \emph{abstain} from making a prediction, incurring no penalty if the instance was indeed corrupted. This semi-adversarial setting naturally sits between the classical stochastic case with i.i.d.\ instances, for which function classes of finite VC dimension are learnable, and the fully adversarial case with arbitrary instances, which is known to be significantly more restrictive. For this problem, Goel et al. (2023) showed that, if the learner knows the distribution $\mu$ of clean samples in advance, learning can be achieved for all VC classes without any restriction on the adversary's corruptions. This is, however, a strong assumption in both theory and practice: a natural question is whether similar learning guarantees can be achieved without prior distributional knowledge, as is standard in classical learning frameworks (e.g., PAC learning or asymptotic consistency) and other non-i.i.d.\ models (e.g., smoothed online learning). We therefore focus on the distribution-free setting where $\mu$ is \emph{unknown} and propose an algorithm, \textsc{AbstainBoost}, based on boosting weak learners, which guarantees sublinear error for general VC classes in \emph{distribution-free} abstention learning against oblivious adversaries. The algorithm also enjoys similar guarantees against adaptive adversaries for structured function classes, including linear classifiers. These results are complemented by corresponding lower bounds, which reveal an interesting polynomial trade-off between the misclassification error and the number of erroneous abstentions.
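The interaction model above can be made concrete with a toy simulation. This is only an illustrative sketch of the round-by-round protocol, not the paper's \textsc{AbstainBoost} algorithm: all names (\texttt{abstention\_protocol}, the sampler and predictor callables, the corruption probability) are hypothetical, and the adversary here is modeled as a simple random injector for brevity.

```python
import random

def abstention_protocol(rounds, clean_sampler, corrupt_sampler, corrupt_prob,
                        predictor, target):
    """Toy simulation of sequential prediction with penalty-free abstentions.

    At each round, either a clean i.i.d. instance or an adversarially injected
    one arrives. The predictor returns a label, or None to abstain. Abstaining
    on a corrupted instance costs nothing; abstaining on a clean instance is
    counted as an erroneous abstention, and any wrong label is counted as a
    misclassification.
    """
    misclassifications = 0   # wrong labels on any instance
    erroneous_abstentions = 0  # abstentions on clean (i.i.d.) instances
    for _ in range(rounds):
        corrupted = random.random() < corrupt_prob
        x = corrupt_sampler() if corrupted else clean_sampler()
        y_hat = predictor(x)  # a label, or None to abstain
        if y_hat is None:
            if not corrupted:
                erroneous_abstentions += 1
            # no penalty for abstaining on a corrupted instance
        elif y_hat != target(x):
            misclassifications += 1
    return misclassifications, erroneous_abstentions
```

A predictor that abstains near its decision boundary trades misclassifications for abstentions, which is the polynomial trade-off the lower bounds in the abstract quantify.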