A wide range of machine learning algorithms iteratively add data to the training sample; examples include semi-supervised learning, active learning, multi-armed bandits, and Bayesian optimization. We embed this kind of data addition into decision theory by framing data selection as a decision problem. This paves the way for finding Bayes-optimal selections of data. For the illustrative case of self-training in semi-supervised learning, we derive the corresponding Bayes criterion. We further show empirically that deploying this criterion mitigates confirmation bias, evaluating our method with generalized linear models, semi-parametric generalized additive models, and Bayesian neural networks on simulated and real-world data.
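The self-training loop the abstract refers to can be sketched as follows: fit a model on the labeled data, score the unlabeled candidates with a selection criterion, pseudo-label the best-scoring point, add it to the training sample, and repeat. This is a minimal sketch only; the selection score below is plain predictive confidence, a hypothetical placeholder for the Bayes-optimal criterion derived in the paper, which the abstract does not spell out, and the logistic-regression model and toy data are likewise illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.1, steps=500):
    """Fit logistic regression by gradient descent (minimal stand-in model)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def self_train(X_lab, y_lab, X_unlab, n_rounds=5):
    """Iteratively pseudo-label the best-scoring unlabeled point.

    NOTE: the selection score used here (distance of the predicted
    probability from 0.5) is a placeholder assumption, not the
    Bayes-optimal criterion from the paper.
    """
    pool = list(range(len(X_unlab)))
    for _ in range(min(n_rounds, len(pool))):
        w = fit_logreg(X_lab, y_lab)
        p = 1.0 / (1.0 + np.exp(-X_unlab[pool] @ w))
        best = int(np.argmax(np.abs(p - 0.5)))   # selection step
        idx = pool.pop(best)
        X_lab = np.vstack([X_lab, X_unlab[idx]])  # add pseudo-labeled point
        y_lab = np.append(y_lab, float(p[best] > 0.5))
    return fit_logreg(X_lab, y_lab)

# Toy data: two well-separated Gaussian blobs, few labels per class.
X0 = rng.normal(-2.0, 1.0, size=(20, 2))
X1 = rng.normal(+2.0, 1.0, size=(20, 2))
X_lab = np.vstack([X0[:3], X1[:3]])
y_lab = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
X_unlab = np.vstack([X0[3:], X1[3:]])

w = self_train(X_lab, y_lab, X_unlab)
preds = (X_unlab @ w > 0).astype(float)
```

Confidence-based selection like the above is exactly the setting where confirmation bias can arise: the model preferentially adds points it already agrees with, which is the failure mode the paper's Bayes criterion is designed to mitigate.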