A wide range of machine learning algorithms iteratively add data to the training sample. Examples include semi-supervised learning, active learning, multi-armed bandits, and Bayesian optimization. We embed this kind of data addition into decision theory by framing data selection as a decision problem. This paves the way for finding Bayes-optimal selections of data. For the illustrative case of self-training in semi-supervised learning, we derive the respective Bayes criterion. We further show empirically that deploying this criterion mitigates confirmation bias, evaluating our method for generalized linear models, semi-parametric generalized additive models, and Bayesian neural networks on simulated and real-world data.
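To make the framing concrete, the following is a minimal sketch of a generic self-training loop in which the data-selection step is a plug-in decision rule. Everything here is a hypothetical toy setup: the nearest-mean classifier, the softmax pseudo-probabilities, and the `confident` threshold rule are illustrative assumptions, not the Bayes criterion derived in the paper. The naive confidence-threshold rule shown is precisely the kind of selection that invites confirmation bias, which the Bayes-optimal criterion would replace.

```python
# Toy self-training loop (illustrative only; NOT the paper's Bayes criterion).
# A plug-in `select` function plays the role of the decision rule that chooses
# which pseudo-labeled points to add to the training sample each round.
import math

def fit(labeled):
    """Fit class means of a 1-D nearest-mean classifier."""
    means = {}
    for y in {y for _, y in labeled}:
        xs = [x for x, yy in labeled if yy == y]
        means[y] = sum(xs) / len(xs)
    return means

def predict_proba(means, x):
    """Softmax over negative squared distances to the class means."""
    scores = {y: math.exp(-(x - m) ** 2) for y, m in means.items()}
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

def self_train(labeled, unlabeled, select, rounds=5):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit(labeled)
        chosen = select(model, unlabeled)     # the decision step: which data to add
        if not chosen:
            break
        for x in chosen:
            proba = predict_proba(model, x)
            yhat = max(proba, key=proba.get)  # pseudo-label from current model
            labeled.append((x, yhat))
            unlabeled.remove(x)
    return fit(labeled)

def confident(model, unlabeled, tau=0.9):
    """Naive rule: add points the current model is already sure about.
    This is exactly the selection behavior prone to confirmation bias."""
    return [x for x in unlabeled
            if max(predict_proba(model, x).values()) > tau]

labeled = [(-2.0, 0), (-1.5, 0), (1.5, 1), (2.0, 1)]
unlabeled = [-1.8, -0.2, 0.1, 1.7]
model = self_train(labeled, unlabeled, confident)
print(sorted(model.items()))  # final class means after self-training
```

Swapping `confident` for a decision-theoretic rule, i.e. one that selects the data whose addition maximizes expected utility under the model's posterior, is the substitution the abstract describes; the loop itself stays unchanged.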