Partial Label Learning (PLL) addresses learning from ambiguously labelled data and has been successfully applied in fields such as image recognition. Nevertheless, traditional PLL methods rely on the closed-world assumption, which can be limiting in open-world scenarios and degrade model performance and generalization. To tackle these challenges, we introduce a novel method, PLL-OOD, which is the first to incorporate Out-of-Distribution (OOD) detection into the PLL framework. PLL-OOD enhances model adaptability and accuracy by combining self-supervised learning with a partial label loss and introducing the Partial-Energy (PE) score for OOD detection. The approach improves feature representation and effectively disambiguates candidate labels, using a dynamic label confidence matrix to refine predictions. The PE score, adjusted by label confidence, accurately identifies OOD instances and steers model training toward in-distribution data, markedly improving the robustness and performance of PLL models in open-world settings. To validate the approach, we conduct comprehensive comparative experiments that pair existing state-of-the-art PLL models with multiple OOD scores on the CIFAR-10 and CIFAR-100 datasets against various OOD datasets. The results demonstrate that the proposed PLL-OOD framework outperforms existing models, confirming its effectiveness.
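The abstract does not give the exact formulation of the Partial-Energy (PE) score. As a minimal illustrative sketch, assuming the PE score is a candidate-confidence-weighted variant of the standard free-energy OOD score (the function name `partial_energy` and the weighting scheme below are our assumptions, not the paper's definition):

```python
import numpy as np

def partial_energy(logits, conf, T=1.0):
    """Hypothetical confidence-weighted free-energy OOD score.

    logits: (C,) classifier outputs for one instance.
    conf:   (C,) candidate-label confidence weights (non-candidates = 0).
    Lower scores suggest in-distribution; higher scores suggest OOD.
    """
    z = np.asarray(logits, dtype=float) / T
    w = np.asarray(conf, dtype=float)
    m = z.max()  # shift for a numerically stable log-sum-exp
    # Weighted free energy: -T * log sum_y conf_y * exp(logit_y / T)
    return -T * (m + np.log(np.sum(w * np.exp(z - m))))

# A confidently predicted candidate label yields a much lower (more
# in-distribution-like) score than flat, uninformative logits.
pe_id = partial_energy([10.0, 0.0, 0.0], [1.0, 0.0, 0.0])     # ≈ -10.0
pe_ood = partial_energy([0.0, 0.0, 0.0], [1/3, 1/3, 1/3])     # ≈ 0.0
```

Under this sketch, instances whose energy concentrates on high-confidence candidate labels score low (in-distribution), while instances with diffuse logits score high and can be down-weighted during training.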