In this article, bipartite ranking, a statistical learning problem involved in many applications and widely studied in the passive context, is approached in a much more general \textit{active setting} than the discrete one previously considered in the literature. While the latter assumes that the conditional distribution is piecewise constant, the framework we develop permits, in contrast, dealing with continuous conditional distributions, provided that they fulfil a Hölder smoothness constraint. We first show that a naive approach, based on discretisation at a uniform level fixed \textit{a priori} followed by the active strategy designed for the discrete setting, generally fails. Instead, we propose a novel algorithm, referred to as smooth-rank and designed for the continuous setting, which aims to minimise the distance in $\sup$ norm between the ROC curve of the estimated ranking rule and the optimal one. We show that, for a fixed accuracy level $\varepsilon>0$ and confidence parameter $\delta\in (0,1)$, smooth-rank is PAC$(\varepsilon,\delta)$. In addition, we provide a problem-dependent upper bound on the expected sampling time of smooth-rank and establish a problem-dependent lower bound on the expected sampling time of any PAC$(\varepsilon,\delta)$ algorithm. Beyond this theoretical analysis, numerical results are presented, providing solid empirical evidence of the performance of the proposed algorithm, which compares favourably with alternative approaches.
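The criterion minimised above can be made concrete with a minimal sketch. This is only an illustration of the $\sup$-norm ROC distance, not the smooth-rank algorithm itself; the scores, labels, and "optimal" ROC curve below are hypothetical placeholders:

```python
def roc_curve(scores, labels):
    """Empirical ROC of a scoring rule: (fpr, tpr) points, thresholds swept
    from high to low. Assumes binary labels in {0, 1} with both classes present."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    fpr, tpr = [0.0], [0.0]
    fp = tp = 0
    for _, y in pairs:
        if y == 1:
            tp += 1
        else:
            fp += 1
        fpr.append(fp / n_neg)
        tpr.append(tp / n_pos)
    return fpr, tpr

def sup_distance(fpr, tpr, optimal_roc, grid_size=1000):
    """Sup-norm distance between the empirical (step) ROC curve and a
    reference ROC curve, evaluated on a uniform grid of false-positive rates."""
    def step_tpr(a):
        # value of the empirical step ROC at false-positive rate a
        return max((t for f, t in zip(fpr, tpr) if f <= a), default=0.0)
    return max(abs(step_tpr(i / grid_size) - optimal_roc(i / grid_size))
               for i in range(grid_size + 1))

# Hypothetical example: a perfectly separating scorer on a problem whose
# optimal ROC is assumed to be identically 1 (fully separable classes).
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
f, t = roc_curve(scores, labels)
d = sup_distance(f, t, optimal_roc=lambda a: 1.0)  # 0.0 here: the curves coincide
```

A PAC$(\varepsilon,\delta)$ guarantee, in these terms, means the returned ranking rule satisfies `d <= eps` with probability at least $1-\delta$.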