Multi-distribution learning (MDL), which seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions, has emerged as a unified framework in response to the evolving demand for robustness, fairness, multi-group collaboration, etc. Achieving data-efficient MDL necessitates adaptive sampling, also called on-demand sampling, throughout the learning process. However, a substantial gap persists between the state-of-the-art upper and lower bounds on the optimal sample complexity. Focusing on a hypothesis class of Vapnik-Chervonenkis (VC) dimension $d$, we propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^{2}$ (modulo some logarithmic factor), matching the best-known lower bound. Our algorithmic ideas and theory are further extended to accommodate Rademacher classes. The proposed algorithms are oracle-efficient, accessing the hypothesis class solely through an empirical risk minimization (ERM) oracle. Additionally, we establish the necessity of randomization, revealing a large sample-size barrier when only deterministic hypotheses are permitted. These findings resolve three open problems posed at COLT 2023 (i.e., \citet[Problems 1, 3 and 4]{awasthi2023sample}).
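For concreteness, the MDL objective and the notion of $\varepsilon$-optimality can be formalized as follows; this is a standard formulation, stated here for the 0-1 loss for illustration, with notation ($\mu_i$, $R_i$, $\Delta(\mathcal{H})$) chosen for this sketch rather than taken from the paper. Given $k$ distributions $\mu_1,\dots,\mu_k$ over $\mathcal{X}\times\mathcal{Y}$ and a hypothesis class $\mathcal{H}$ of VC dimension $d$, a randomized hypothesis is a distribution $p\in\Delta(\mathcal{H})$ over $\mathcal{H}$, and MDL targets the minimax risk
\[
\min_{p\,\in\,\Delta(\mathcal{H})}\;\max_{1\le i\le k}\;\mathbb{E}_{h\sim p}\big[R_i(h)\big],
\qquad
R_i(h) \;:=\; \mathbb{E}_{(x,y)\sim\mu_i}\big[\mathbb{1}\{h(x)\neq y\}\big].
\]
A randomized hypothesis $p$ is $\varepsilon$-optimal if its worst-case risk $\max_{i}\mathbb{E}_{h\sim p}[R_i(h)]$ exceeds this minimax value by at most $\varepsilon$; the sample complexity above counts the total number of labeled examples drawn adaptively from the $k$ distributions to produce such a $p$.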