Corruption is notoriously widespread in data collection. Despite extensive research, the existing literature on corruption predominantly focuses on specific settings and learning scenarios, lacking a unified view; how to effectively model and mitigate corruption in machine learning problems remains only partially understood. In this work, we develop a general theory of corruption from an information-theoretic perspective, with Markov kernels as the foundational mathematical tool. We generalize the definition of corruption beyond distributional shift: corruption encompasses all modifications of a learning problem, including changes in the model class and the loss function; here we focus on changes in probability distributions. First, we construct a provably exhaustive framework of pairwise Markovian corruptions. The framework not only allows us to classify corruption types by their input space, but also unifies prior work on specific corruption models and establishes a consistent nomenclature. Second, we systematically analyze the consequences of corruption for learning tasks by comparing Bayes risks in the clean and corrupted scenarios. This analysis sheds light on the complexities arising from joint and dependent corruptions of both labels and attributes. Notably, while label corruption affects only the loss function, the more intricate cases involving attribute corruption extend the influence beyond the loss to the hypothesis class. Third, building on these results, we investigate mitigations for the various corruption types. We extend existing loss-correction results for label corruption, and identify the need to generalize the classical corruption-corrected learning framework to a new paradigm with weaker requirements. Within this weaker setting, we prove a negative result for loss correction in the attribute and joint corruption cases.
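To make the abstract's objects concrete, the following is a minimal sketch of a pairwise Markovian label corruption and the classical backward loss correction that the paper extends. The 3-class setting, the symmetric flip probability of 0.2, and the per-class loss values are illustrative assumptions, not taken from the paper; the correction itself is the standard construction in which the corrected loss vector solves `T @ corrected = loss`, so that its expectation under corrupted labels matches the clean-label expectation of the original loss.

```python
import numpy as np

# A Markov kernel on a finite label space, written as a row-stochastic
# transition matrix: T[i, j] = P(observed label j | clean label i).
# Hypothetical symmetric noise on 3 classes with flip probability 0.2.
T = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

# Per-class losses of some fixed prediction (illustrative values).
loss = np.array([0.2, 1.5, 2.3])

# Backward loss correction: choose `corrected` so that T @ corrected = loss.
# Then, for each clean label i, the expectation of the corrected loss over
# the noisy label distribution T[i] recovers the clean loss loss[i].
corrected = np.linalg.solve(T, loss)

for clean_label in range(3):
    expected_under_noise = T[clean_label] @ corrected
    assert np.isclose(expected_under_noise, loss[clean_label])
```

This only works when the corruption acts on the label alone and the kernel `T` is known and invertible; the abstract's negative result concerns precisely the attribute and joint corruption cases, where no analogous correction of the loss suffices.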