Next-token prediction (NTP) over large text corpora has become the go-to paradigm for training large language models. Yet, it remains unclear how NTP influences the mapping of linguistic patterns to geometric properties of the resulting model representations. We frame the training of large language models as soft-label classification over sparse probabilistic label vectors, coupled with an analytical approximation that allows unrestricted generation of context embeddings. This approach links NTP training to rank-constrained, nuclear-norm regularized optimization in the logit domain, offering a framework for analyzing the geometry of word and context embeddings. In large embedding spaces, we find that NTP implicitly favors learning logits with a sparse plus low-rank structure. While the sparse component captures the co-occurrence frequency of context-word pairs, the orthogonal low-rank component, which becomes dominant as training progresses, depends solely on the sparsity pattern of the co-occurrence matrix. Consequently, when projected onto an appropriate subspace, representations of contexts that are followed by the same set of next-tokens collapse, a phenomenon we term subspace-collapse. We validate our findings on synthetic and small-scale real language datasets. Finally, we outline potential research directions aimed at deepening the understanding of NTP's influence on the learning of linguistic patterns and regularities.
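To make the soft-label classification view concrete, the sketch below (not the paper's code; all symbols and sizes `m`, `V`, `d`, `P`, `H`, `W` are illustrative assumptions) builds a sparse matrix of empirical next-token probabilities and evaluates the soft-label cross-entropy over logits parameterized as a product of context and word embeddings, which is what constrains the logit matrix to low rank.

```python
import numpy as np

# Minimal sketch of NTP as soft-label classification (illustrative, not the paper's code).
# Assumptions: m distinct contexts, vocabulary of size V, and an empirical
# conditional-probability matrix P (m x V) whose row j is the sparse
# probabilistic label vector of next-tokens observed after context j.

rng = np.random.default_rng(0)
m, V, d = 8, 20, 5  # number of contexts, vocab size, embedding dimension (all illustrative)

# Sparse soft-label matrix P: each context is followed by only a few next-tokens.
P = np.zeros((m, V))
for j in range(m):
    support = rng.choice(V, size=3, replace=False)  # sparse support of next-tokens
    P[j, support] = rng.dirichlet(np.ones(3))       # empirical next-token frequencies

# Logits are parameterized by context embeddings H (m x d) and word embeddings W (V x d),
# so the logit matrix L = H @ W.T has rank at most d (the rank constraint in the abstract).
H = rng.normal(size=(m, d))
W = rng.normal(size=(V, d))

def ntp_loss(H, W, P):
    """Soft-label cross-entropy between softmax(H @ W.T) and the sparse labels P."""
    logits = H @ W.T
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(P * log_probs).sum(axis=1).mean()

print("soft-label NTP loss:", ntp_loss(H, W, P))
```

Under this formulation, minimizing the loss over H and W drives the logit matrix toward the sparse-plus-low-rank structure described above, with the low-rank part determined by the support (sparsity pattern) of P rather than its exact probabilities.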