Prior work on partial-label learning (PLL) has shown that learning is possible even when each instance is associated with a set of candidate labels, rather than a single accurate but costly label. However, the necessary conditions for learning with partial labels remain unclear, and existing PLL methods are effective only in specific scenarios. In this work, we mathematically characterize the settings in which PLL is feasible. In addition, we present PL A-$k$NN, an adaptive nearest-neighbors algorithm for PLL that is effective in general scenarios and enjoys strong performance guarantees. Experimental results corroborate that PL A-$k$NN can outperform state-of-the-art methods in general PLL scenarios.