Language models (LMs) are a central component of modern AI systems, and diffusion-based language models (DLMs) have recently emerged as a competitive alternative. Both paradigms rely on word embeddings not only to represent the input sentence, but also to represent the target sentence that backbone models are trained to predict. We argue that such static embeddings of target words are insensitive to neighboring words, encouraging locally accurate word prediction while neglecting global structure across the target sentence. To address this limitation, we propose a continuous sentence representation, termed the sentence curve, defined as a spline curve whose control points each influence multiple words in the sentence. Based on this representation, we introduce the sentence curve language model (SCLM), which extends DLMs to predict sentence curves instead of static word embeddings. We theoretically show that sentence curve prediction induces a regularization effect that promotes global structure modeling, and we characterize how different sentence curve types affect this behavior. Empirically, SCLM achieves state-of-the-art performance among DLMs on IWSLT14 and WMT14, trains stably without burdensome knowledge distillation, and shows promising results relative to discrete DLMs on LM1B.
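To make the sentence-curve idea concrete, the following is a minimal sketch of how a spline-based sentence representation could be sampled: a small set of control points in embedding space defines a Bézier curve (one possible spline type; the paper's actual curve family and parameterization are not specified here), and each word position reads a point off that curve. Because every control point contributes to every sampled position, perturbing one control point moves multiple word representations at once. The function names and the uniform spacing of word positions are illustrative assumptions.

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via De Casteljau's
    algorithm: repeatedly interpolate between adjacent points until one remains."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def sentence_curve(control_points, num_words):
    """Sample the curve at evenly spaced parameters, one per word position,
    yielding a (num_words, embedding_dim) array of continuous word vectors.
    (Illustrative sketch: uniform spacing is an assumption, not the paper's design.)"""
    ts = np.linspace(0.0, 1.0, num_words)
    return np.stack([bezier_point(control_points, t) for t in ts])
```

In this sketch, a sentence of ten words with 4 control points in a d-dimensional embedding space is represented by only 4×d curve parameters, and the curve's smoothness couples neighboring word positions, which is one way to read the regularization effect the abstract describes.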