Mixture-of-Experts (MoE) architectures combine specialized predictors through a learned gate and are effective in both regression and classification, but for classification with softmax (multinomial-logistic) gating, rigorous guarantees for stable maximum-likelihood training and principled model selection remain limited. We address both issues in the full-data (batch) regime. First, we derive a batch minorization-maximization (MM) algorithm for softmax-gated multinomial-logistic MoE based on an explicit quadratic minorizer, yielding coordinate-wise closed-form updates that guarantee monotone ascent of the objective and global convergence to a stationary point (in the standard MM sense), avoiding the approximate M-steps common in EM-type implementations. Second, we prove finite-sample rates for conditional density estimation and parameter recovery, and we adapt dendrograms of mixing measures to the classification setting, obtaining a sweep-free selector of the number of experts that attains near-parametric optimal rates after merging redundant fitted atoms. Experiments on biological protein--protein interaction prediction validate the full pipeline, delivering improved accuracy and better-calibrated probabilities relative to strong statistical and machine-learning baselines.
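As a minimal sketch of the kind of quadratic-minorizer MM update described above, consider a Böhning-type bound for the gating parameters; the paper's exact minorizer, block structure, and objective (e.g., a responsibility-weighted surrogate) may differ, so the symbols below are illustrative assumptions. For gating parameters $\beta$ with concave-smooth log-likelihood contribution $\ell(\beta)$ over $K$ gating categories and covariates $x_1,\dots,x_n$, Böhning's bound supplies a fixed matrix
\[
B \;=\; \tfrac{1}{2}\Bigl(I_{K-1} - \tfrac{1}{K}\mathbf{1}\mathbf{1}^{\top}\Bigr) \otimes \sum_{i=1}^{n} x_i x_i^{\top},
\qquad -\nabla^2 \ell(\beta) \preceq B \ \text{for all } \beta,
\]
which yields the quadratic minorizer
\[
\ell(\beta) \;\ge\; \ell\bigl(\beta^{(t)}\bigr) + \bigl(\beta - \beta^{(t)}\bigr)^{\top}\nabla \ell\bigl(\beta^{(t)}\bigr) - \tfrac{1}{2}\bigl(\beta - \beta^{(t)}\bigr)^{\top} B \,\bigl(\beta - \beta^{(t)}\bigr),
\]
with equality at $\beta = \beta^{(t)}$. Its maximizer is available in closed form,
\[
\beta^{(t+1)} \;=\; \beta^{(t)} + B^{-1}\nabla \ell\bigl(\beta^{(t)}\bigr),
\]
and monotone ascent $\ell(\beta^{(t+1)}) \ge \ell(\beta^{(t)})$ follows directly from the minorization property, since the surrogate touches $\ell$ at $\beta^{(t)}$ and lower-bounds it everywhere.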