We study online learning in the random-order model, where the multiset of loss functions is chosen adversarially but revealed in a uniformly random order. Building on the batch-to-online conversion of Dong and Yoshida (2023), we show that if an offline algorithm admits a $(1+\varepsilon)$-approximation guarantee and the effect of $\varepsilon$ on its average sensitivity is characterized by a function $\varphi(\varepsilon)$, then an adaptive choice of $\varepsilon$ yields a small-loss regret bound of $\tilde O(\varphi^{\star}(\mathrm{OPT}_T))$, where $\varphi^{\star}$ is the concave conjugate of $\varphi$, $\mathrm{OPT}_T$ is the offline optimum over $T$ rounds, and $\tilde O$ hides polylogarithmic factors in $T$. Our method requires no regularity assumptions, such as smoothness, on the loss functions, and it can be viewed as a generalization of AdaGrad-style tuning, applied to the approximation parameter $\varepsilon$ rather than to a step size. Our result recovers and strengthens the $(1+\varepsilon)$-approximate regret bounds of Dong and Yoshida (2023) and yields small-loss regret bounds for online $k$-means clustering, low-rank approximation, and regression. We further apply our framework to online submodular function minimization using $(1\pm\varepsilon)$-cut sparsifiers of submodular hypergraphs, obtaining a small-loss regret bound of $\tilde O(n^{3/4}(1 + \mathrm{OPT}_T^{3/4}))$, where $n$ is the ground-set size. Our approach sheds light on the power of sparsification and related techniques in establishing small-loss regret bounds in the random-order model.
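To make the role of the concave conjugate concrete, here is a minimal balancing sketch; the additive decomposition below is an illustrative assumption about how the two error sources combine, not the analysis stated in the abstract. Suppose that running the conversion with a fixed accuracy parameter $\varepsilon$ incurs regret of the form
\[
  \mathrm{Regret}_T(\varepsilon) \;\lesssim\; \varepsilon \cdot \mathrm{OPT}_T \;+\; \varphi(\varepsilon),
\]
where the first term is the cost of settling for a $(1+\varepsilon)$-approximation and the second is the sensitivity-driven overhead. Optimizing over $\varepsilon$ then yields, up to the sign convention chosen for the concave conjugate,
\[
  \inf_{\varepsilon > 0} \bigl\{ \varepsilon \,\mathrm{OPT}_T + \varphi(\varepsilon) \bigr\} \;=\; \varphi^{\star}(\mathrm{OPT}_T),
  \qquad \text{with } \varphi^{\star}(y) := \inf_{x > 0} \{ x y + \varphi(x) \},
\]
which is the quantity an adaptive choice of $\varepsilon$ tracks without knowing $\mathrm{OPT}_T$ in advance.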
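As a worked instance under a hypothetical power-law form $\varphi(\varepsilon) = c\,\varepsilon^{-a}$ (the exponent $a$ and scale $c$ are assumptions for illustration), the balance point and resulting bound are
\[
  \varepsilon^{\ast} = \Bigl(\tfrac{a\,c}{\mathrm{OPT}_T}\Bigr)^{\frac{1}{a+1}},
  \qquad
  \varepsilon^{\ast}\,\mathrm{OPT}_T + c\,(\varepsilon^{\ast})^{-a}
  = \Theta\!\bigl(c^{\frac{1}{a+1}}\,\mathrm{OPT}_T^{\frac{a}{a+1}}\bigr).
\]
For example, $a = 3$ and $c = n^{3}$ give $\Theta\bigl(n^{3/4}\,\mathrm{OPT}_T^{3/4}\bigr)$, matching the shape of the submodular bound above; whether these are the actual parameters arising from the $(1\pm\varepsilon)$-cut sparsifiers is not something the abstract specifies.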