Gradient descent on log-sum-exp (LSE) objectives performs implicit expectation--maximization (EM): the gradient with respect to each component's output equals that component's responsibility. The same theory predicts collapse in the absence of a volume-control term analogous to the log-determinant in Gaussian mixture models. We instantiate the theory in a single-layer encoder trained with an LSE objective and InfoMax regularization for volume control. Experiments confirm the theory's predictions: the gradient--responsibility identity holds exactly; LSE alone collapses; a variance penalty prevents dead components; and a decorrelation penalty prevents redundancy. The model exhibits EM-like optimization dynamics in which lower loss does not correspond to better features and adaptive optimizers offer no advantage. The resulting decoder-free model learns interpretable mixture components, confirming that implicit-EM theory can prescribe architectures.
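The claimed identity admits a one-line check (writing $f_k$ for the $k$-th component's log-score, an assumed notation not fixed by the abstract):
\[
\frac{\partial}{\partial f_k} \log \sum_{j=1}^{K} e^{f_j}
\;=\; \frac{e^{f_k}}{\sum_{j=1}^{K} e^{f_j}}
\;=\; \operatorname{softmax}(f)_k ,
\]
which, for $f_k = \log \pi_k + \log p(x \mid k)$, is exactly the posterior responsibility $r_k = p(k \mid x)$ computed in the E-step of EM.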
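For concreteness, one plausible form of the regularized objective is sketched below; the penalty forms and the symbols $\lambda_v$, $\lambda_c$, $\gamma$ are illustrative assumptions rather than the paper's exact choices:
\[
\mathcal{L} \;=\; -\,\mathbb{E}_{x}\Big[\log \sum_{k=1}^{K} e^{f_k(x)}\Big]
\;+\; \lambda_v \sum_{k=1}^{K} \max\!\big(0,\; \gamma - \operatorname{Std}[f_k]\big)
\;+\; \lambda_c \sum_{j \ne k} \operatorname{Cov}[f_j, f_k]^2 ,
\]
where the variance hinge keeps every component active (no dead components) and the covariance penalty decorrelates components (no redundancy), together playing the volume-control role that the log-determinant plays in Gaussian mixture models.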