In recent years, ever larger data sets have become available. Data accuracy, the absence of verifiable errors in the data, is crucial if these large resources are to support high-quality research, downstream applications, and model training. This raises the question of how to curate such large and growing data sets and improve their accuracy, especially when manual curation is infeasible at scale. This paper presents a unified procedure for the iterative and continuous improvement of data sets. We provide theoretical guarantees that data accuracy tests speed up error reduction and, most importantly, that the proposed approach asymptotically eliminates all errors in the data with probability one. We corroborate the theoretical results with simulations and a real-world use case.
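To make the asymptotic claim concrete, the following is a minimal illustrative simulation, not the paper's actual procedure: records containing errors are repeatedly sampled and tested, and every error the test uncovers is corrected, so the error count drops to zero in finitely many rounds with probability one. All names and parameters (N_RECORDS, INITIAL_ERROR_RATE, SAMPLE_PER_ROUND, the uniform sampling scheme) are assumptions chosen only for illustration.

```python
import random

# Illustrative sketch only: a data set with some erroneous records is
# repeatedly sampled, tested for accuracy, and corrected.
# All parameters below are hypothetical.
N_RECORDS = 10_000
INITIAL_ERROR_RATE = 0.05
SAMPLE_PER_ROUND = 500

random.seed(0)

# True if the record contains a (verifiable) error.
has_error = [random.random() < INITIAL_ERROR_RATE for _ in range(N_RECORDS)]

round_no = 0
while any(has_error):
    round_no += 1
    # Each round, an accuracy test is run on a uniform random sample of
    # records; every error the test uncovers is corrected.
    for idx in random.sample(range(N_RECORDS), SAMPLE_PER_ROUND):
        if has_error[idx]:
            has_error[idx] = False
    print(f"round {round_no}: {sum(has_error)} errors remaining")

print(f"all errors eliminated after {round_no} rounds")
```

Under these assumptions each erroneous record is tested with a fixed positive probability per round, so every remaining error is eventually found and corrected, and the simulated error count reaches zero almost surely, mirroring the probability-one elimination guarantee stated above.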