Misleading or unnecessary data can have outsized impacts on the health and accuracy of Machine Learning (ML) models. We present a Bayesian sequential selection method, akin to Bayesian experimental design, that identifies critically important information within a dataset while ignoring data that is either misleading or brings unnecessary complexity to the surrogate model of choice. Our method improves sample-wise error convergence and eliminates instances where more data leads to worse performance and instabilities of the surrogate model, a phenomenon often termed sample-wise ``double descent''. We find these instabilities are a result of the complexity of the underlying map and are linked to extreme events and heavy tails. Our approach has two key features. First, the selection algorithm dynamically couples the chosen model and data: data is chosen for its merit in improving the selected model, rather than being compared strictly against other data. Second, a natural convergence of the method removes the need for dividing the data into training, testing, and validation sets. Instead, the selection metric inherently assesses testing and validation error through global statistics of the model, ensuring that key information is never wasted in testing or validation. The method is applied using both Gaussian process regression and deep neural network surrogate models.
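To make the sequential, model-coupled selection loop concrete, the following is a minimal sketch of greedy Bayesian data selection with a Gaussian process surrogate. It is an illustration only, not the paper's algorithm: the acquisition here is plain predictive variance, whereas the paper's selection metric is built on global statistics of the model; the RBF kernel, length scale, and pool-based setup are all assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(X1, X2, ell=0.5):
    # Squared-exponential kernel; length scale `ell` is an assumed hyperparameter.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-6, ell=0.5):
    # Standard GP regression posterior mean and variance via Cholesky.
    K = rbf_kernel(X_train, X_train, ell) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf_kernel(X_query, X_query, ell)) - (v**2).sum(0)
    return mean, np.maximum(var, 0.0)

def sequential_select(X_pool, y_pool, n_select, seed=0):
    # Greedy loop: refit the surrogate on the chosen subset, then add the
    # pool point the current model is most uncertain about. Data is scored
    # against the model, not strictly against other data.
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X_pool)))]
    for _ in range(n_select - 1):
        _, var = gp_posterior(X_pool[idx], y_pool[idx], X_pool)
        var[idx] = -np.inf  # never re-select a point already in the model
        idx.append(int(np.argmax(var)))
    return idx

X = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(6.0 * X[:, 0])
chosen = sequential_select(X, y, n_select=5)
```

Replacing the variance acquisition with a metric sensitive to heavy-tailed output statistics is what would steer such a loop toward the extreme-event-aware behavior described above.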