We consider the problem of evaluating black-box multi-class classifiers. In the standard setup, we observe class labels $Y \in \{0,1,\ldots,M-1\}$ generated according to the conditional distribution $Y \mid X \sim \text{Multinom}\big(\eta(X)\big)$, where $X$ denotes the features and $\eta$ maps from the feature space to the $(M-1)$-dimensional simplex. A black-box classifier is an estimate $\hat\eta$ for which we make no assumptions about the training algorithm. Given holdout data, our goal is to evaluate the performance of the classifier $\hat\eta$. Recent work suggests treating this as a goodness-of-fit problem by testing the hypothesis $H_0: \rho\big((X,Y),(X',Y')\big) \le \delta$, where $\rho$ is a metric between the two distributions and $(X',Y') \sim P_X \times \text{Multinom}\big(\hat\eta(X)\big)$. Combining ideas from algorithmic fairness, the Neyman-Pearson lemma, and conformal p-values, we propose a new methodology for this testing problem. The key idea is to generate a second sample $(X',Y') \sim P_X \times \text{Multinom}\big(\hat\eta(X)\big)$, which reduces the task to two-sample conditional distribution testing. Using part of the data, we train an auxiliary binary classifier, called a distinguisher, that attempts to distinguish between the two samples. The distinguisher's ability to differentiate the samples, measured by a rank-sum statistic, is then used to assess the difference between $\hat\eta$ and $\eta$. Using techniques from cross-validation central limit theorems, we derive an asymptotically rigorous test under suitable stability conditions on the distinguisher.
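To make the recipe concrete, the following is a minimal Python sketch of the main steps: draw artificial labels $Y' \sim \text{Multinom}(\hat\eta(X))$, train a binary distinguisher on part of the data, and compare its scores on the two samples with a rank-sum statistic. The function name `distinguisher_rank_sum_test`, the logistic-regression distinguisher, and the single train/test split are illustrative assumptions; the actual methodology relies on cross-fitting and the stability conditions discussed above, which this sketch omits.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def distinguisher_rank_sum_test(X, Y, eta_hat, test_size=0.5, seed=0):
    """Illustrative sketch of the distinguisher-based goodness-of-fit test.

    X       : (n, d) array of holdout features
    Y       : (n,)   array of observed labels in {0, ..., M-1}
    eta_hat : callable returning an (n, M) matrix of predicted class
              probabilities (the black-box classifier \hat\eta)
    """
    rng = np.random.default_rng(seed)
    probs = eta_hat(X)                                    # \hat\eta(X), shape (n, M)

    # Artificial labels Y' ~ Multinom(\hat\eta(X)), one draw per holdout point.
    Y_prime = np.array([rng.choice(probs.shape[1], p=p) for p in probs])

    # Two samples: the real pairs (X, Y) labeled 1, the synthetic pairs labeled 0.
    # (For simplicity Y enters as a raw feature; one-hot encoding is an option.)
    Z = np.vstack([np.column_stack([X, Y]), np.column_stack([X, Y_prime])])
    source = np.concatenate([np.ones(len(X)), np.zeros(len(X))])

    # Train the distinguisher on one part of the data, score the other part.
    Z_tr, Z_te, s_tr, s_te = train_test_split(
        Z, source, test_size=test_size, random_state=seed, stratify=source)
    distinguisher = LogisticRegression(max_iter=1000).fit(Z_tr, s_tr)
    scores = distinguisher.predict_proba(Z_te)[:, 1]

    # Rank-sum (Mann-Whitney) statistic comparing scores on real vs. synthetic pairs;
    # high separation indicates a discrepancy between \hat\eta and \eta.
    stat, pval = mannwhitneyu(scores[s_te == 1], scores[s_te == 0],
                              alternative="greater")
    return stat, pval
```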