We study online learning in the adversarial injection model introduced by [Goel et al. 2017], where a stream of labeled examples is predominantly drawn i.i.d.\ from an unknown distribution $\mathcal{D}$ but may be interspersed with adversarially chosen instances, without the learner knowing which rounds are adversarial. Crucially, labels are always consistent with a fixed target concept (the clean-label setting). The learner is additionally allowed to abstain from predicting, and the total error counts mistakes on rounds where the learner predicts, plus abstentions on i.i.d.\ rounds. Perhaps surprisingly, prior work shows that oracle access to the underlying distribution yields $O(d^2 \log T)$ combined error for VC dimension $d$, while distribution-agnostic algorithms achieve only $\tilde{O}(\sqrt{T})$ for restricted classes, leaving open whether this gap is fundamental. We resolve this question by proving a matching $\Omega(\sqrt{T})$ lower bound already for VC dimension $1$, establishing a sharp separation between the two information regimes. On the algorithmic side, we introduce a potential-based framework driven by \emph{robust witnesses}: small subsets of labeled examples that certify predictions while remaining resilient to adversarial contamination. We instantiate this framework using two combinatorial dimensions: (1) \emph{inference dimension}, yielding combined error $\tilde{O}(T^{1-1/k})$ for classes of inference dimension $k$, and (2) \emph{certificate dimension}, a new relaxation we introduce. As an application, we show that halfspaces in $\mathbb{R}^2$ have certificate dimension $3$, obtaining the first distribution-agnostic bound of $\tilde{O}(T^{2/3})$ for this class. This is notable since [Blum et al. 2021] showed that halfspaces are not robustly learnable under clean-label attacks without abstention.