Modern neural translation models based on the Transformer architecture are known for their high performance, particularly when trained on high-resource datasets. The standard next-token prediction training strategy, while widely adopted in practice, can produce overlooked artifacts such as representation collapse. Previous work has shown that this problem is especially pronounced in the representations of the deeper Transformer layers, which often fail to efficiently utilize the available geometric space. Representation collapse is even more evident in end-to-end training of continuous-output neural machine translation, where a trivial degenerate solution is to map all outputs to the same vector. In this work, we analyze the dynamics of representation collapse at different layers of discrete and continuous NMT Transformers throughout training. We incorporate an existing regularization method based on angular dispersion and demonstrate empirically that it not only mitigates collapse but also improves translation quality. Furthermore, we show that quantized models exhibit similar collapse behavior and that the benefits of regularization are preserved even after quantization.
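The abstract does not specify the exact form of the angular dispersion regularizer. As a minimal illustrative sketch (not the paper's actual formulation), one plausible penalty discourages collapse by minimizing the mean pairwise cosine similarity among hidden-state vectors; the function name and details below are assumptions:

```python
import numpy as np

def angular_dispersion_penalty(h: np.ndarray) -> float:
    """Illustrative collapse penalty: mean pairwise cosine similarity.

    h: (n, d) array of hidden-state vectors. The value is 1.0 when all
    vectors point in the same direction (fully collapsed) and 0.0 when
    they are mutually orthogonal (maximally dispersed).
    """
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    u = h / np.clip(norms, 1e-8, None)      # unit-normalize each vector
    cos = u @ u.T                           # pairwise cosine similarities
    n = h.shape[0]
    # Average over off-diagonal pairs only (exclude self-similarity).
    return float((cos.sum() - n) / (n * (n - 1)))
```

Adding such a term (with a small weight) to the training loss would push hidden states apart on the unit sphere; fully collapsed representations score 1.0, orthogonal ones 0.0.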