This paper presents a conformal prediction method for classification in highly imbalanced and open-set settings, where there are many possible classes and not all of them may be represented in the data. Existing approaches require a finite, known label space and typically rely on random sample splitting, which works well when there are sufficiently many observations from each class. Consequently, they have two limitations: (i) they fail to provide adequate coverage when encountering new labels at test time, and (ii) they may become overly conservative when predicting previously seen labels. To obtain valid prediction sets in the presence of unseen labels, we compute, and integrate into our predictions, a new family of conformal p-values that can test whether a new data point belongs to a previously unseen class. We study these p-values theoretically, establishing their optimality, and uncover an intriguing connection with the classical Good--Turing estimator for the probability of observing a new species. To make more efficient use of imbalanced data, we also develop a selective sample splitting algorithm that partitions training and calibration data based on label frequency, leading to more informative predictions. Although this selective splitting breaks exchangeability, finite-sample guarantees are maintained through suitable re-weighting. Using both simulated and real data, we demonstrate that our method leads to prediction sets with valid coverage even in challenging open-set scenarios with an infinite number of possible labels, and produces more informative predictions under extreme class imbalance.
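As background for the connection mentioned above (and not the exact conformal p-value constructed in this paper), the classical Good--Turing estimator of the probability that the next observation belongs to a previously unseen class is based on the count of singletons: writing $N_1$ for the number of labels observed exactly once among $n$ observations,
\[
\hat{p}_{\mathrm{new}} \;=\; \frac{N_1}{n}.
\]
Intuitively, labels seen only once serve as a proxy for labels not yet seen at all; our p-values build on this idea to test, with finite-sample validity, whether a test point carries a new label.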