The mixture of experts (MoE) model is a versatile framework for predictive modeling that has gained renewed interest in the age of large language models. A collection of predictive ``experts'' is learned along with a ``gating function'' that controls how much influence each expert is given when a prediction is made. This structure allows relatively simple models to excel in complex, heterogeneous data settings. In many contemporary settings, unlabeled data are widely available while labeled data are difficult to obtain; semi-supervised learning methods seek to leverage the unlabeled data. We propose a novel method for semi-supervised learning of MoE models. Our starting point is a semi-supervised MoE model, developed by oceanographers, that makes the strong assumption that the latent clustering structure in the unlabeled data maps directly to the influence the gating function should give each expert in the supervised task. We relax this assumption, positing a noisy connection between the two, and propose an algorithm based on least trimmed squares that succeeds even in the presence of misaligned data. Our theoretical analysis characterizes the conditions under which our approach yields estimators with a near-parametric rate of convergence. Simulated and real data examples demonstrate the method's efficacy.
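For concreteness, the standard MoE prediction takes the following form (a generic sketch; the notation $f_k$, $g_k$, $w_k$, and $K$ is illustrative and not necessarily the paper's):
\[
  f(x) \;=\; \sum_{k=1}^{K} g_k(x)\, f_k(x),
  \qquad
  g_k(x) \;=\; \frac{\exp(w_k^\top x)}{\sum_{j=1}^{K} \exp(w_j^\top x)},
\]
where the $f_k$ are the experts and the softmax gating weights $g_k(x)$ determine how much influence each expert receives at input $x$.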
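The robustness mechanism invoked above is the classical least trimmed squares criterion (shown here in its generic form with squared residuals $r_i^2(\theta)$ and trimming parameter $h \le n$; the paper's exact criterion may differ):
\[
  \hat{\theta} \;=\; \arg\min_{\theta} \sum_{i=1}^{h} r_{(i)}^2(\theta),
\]
where $r_{(1)}^2(\theta) \le \cdots \le r_{(n)}^2(\theta)$ are the ordered squared residuals. Discarding the $n-h$ largest residuals makes the fit insensitive to observations whose unlabeled clustering structure is misaligned with the supervised task.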