We present a methodology for using unlabeled data to design semi-supervised learning (SSL) methods that improve the predictive performance of supervised learning on regression tasks. The main idea is to design different mechanisms for integrating the unlabeled data, each including a mixing parameter $\alpha$ that controls the weight given to the unlabeled data. Focusing on Generalized Linear Models (GLMs) and linear interpolators, we analyze the characteristics of the different mixing mechanisms and prove that, in all cases, integrating the unlabeled data with some nonzero mixing ratio $\alpha>0$ is invariably beneficial in terms of predictive performance. Moreover, we provide a rigorous framework for estimating, from the labeled and unlabeled data at hand, the best mixing ratio $\alpha^*$ at which mixed SSL delivers the best predictive performance. The effectiveness of our methodology in delivering substantial improvements over standard supervised models, in a variety of settings, is demonstrated empirically through extensive simulations that support the theoretical analysis. We also demonstrate the applicability of our methodology (with some intuitive modifications) to improving more complex models, such as deep neural networks, in real-world regression tasks.
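To make the mixing idea concrete, the following is a minimal, hypothetical sketch of one possible mixing mechanism for linear regression: the feature second-moment matrix is mixed between the labeled sample and the (larger) unlabeled sample with weight $\alpha$, and $\alpha^*$ is chosen by cross-validation on the labeled data. Both the specific mechanism and the selection procedure here are illustrative assumptions, not the paper's actual estimators or its framework for estimating $\alpha^*$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: a small labeled set and a larger unlabeled set.
d, n_lab, n_unlab = 5, 30, 300
beta_true = rng.normal(size=d)
X_lab = rng.normal(size=(n_lab, d))
y_lab = X_lab @ beta_true + rng.normal(scale=1.0, size=n_lab)
X_unlab = rng.normal(size=(n_unlab, d))


def mixed_ssl_fit(X_l, y_l, X_u, alpha):
    """Least-squares fit whose second-moment matrix mixes labeled and
    unlabeled features with weight alpha (alpha=0 is purely supervised)."""
    S_lab = X_l.T @ X_l / len(X_l)
    S_unlab = X_u.T @ X_u / len(X_u)
    S_mix = (1 - alpha) * S_lab + alpha * S_unlab
    return np.linalg.solve(S_mix, X_l.T @ y_l / len(X_l))


def pick_alpha(X_l, y_l, X_u, alphas, n_folds=5):
    """Choose the mixing ratio by k-fold cross-validation on labeled data
    (an illustrative stand-in for the paper's estimation framework)."""
    folds = np.array_split(rng.permutation(len(X_l)), n_folds)

    def cv_err(a):
        err = 0.0
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            b = mixed_ssl_fit(X_l[train], y_l[train], X_u, a)
            err += np.mean((X_l[test] @ b - y_l[test]) ** 2)
        return err / n_folds

    return min(alphas, key=cv_err)


alpha_star = pick_alpha(X_lab, y_lab, X_unlab, np.linspace(0.0, 1.0, 11))
beta_mix = mixed_ssl_fit(X_lab, y_lab, X_unlab, alpha_star)
```

Setting `alpha=0` recovers the ordinary supervised least-squares fit, so sweeping the grid directly compares the supervised baseline against every mixed variant.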