We provide a full characterization of the concept classes that are optimistically universally online learnable with $\{0, 1\}$ labels. The notion of optimistically universal online learning was defined in [Hanneke, 2021] in order to understand learnability under minimal assumptions. In this paper, following the philosophy behind that work, we investigate two questions for every concept class: (1) What are the minimal assumptions on the data process that admit online learnability? (2) Is there a learning algorithm that succeeds under every data process satisfying these minimal assumptions? Such an algorithm is said to be optimistically universal for the given concept class. We resolve both questions for all concept classes, and as part of our solution, we design general learning algorithms for each case. Finally, we extend these algorithms and results to the agnostic case, showing, for every concept class, an equivalence between the minimal assumptions on the data process for learnability in the agnostic and realizable cases, as well as the equivalence of optimistically universal learnability in the two cases.