Decoder-only Transformers often struggle with complex reasoning tasks, particularly arithmetic reasoning requiring multiple sequential operations. In this work, we identify representation collapse in the model's intermediate layers as a key factor limiting their reasoning capabilities. To address this, we propose Sequential Variance-Covariance Regularization (Seq-VCR), which enhances the entropy of intermediate representations and prevents collapse. Combined with dummy pause tokens as substitutes for chain-of-thought (CoT) tokens, our method significantly improves performance on arithmetic reasoning problems. In the challenging $5 \times 5$ integer multiplication task, our approach achieves $99.5\%$ exact match accuracy, outperforming models of the same size (which yield $0\%$ accuracy) and GPT-4 with five-shot CoT prompting ($44\%$). We also demonstrate superior results on arithmetic expression and longest increasing subsequence (LIS) datasets. Our findings highlight the importance of preventing intermediate layer representation collapse to enhance the reasoning capabilities of Transformers and show that Seq-VCR offers an effective solution without requiring explicit CoT supervision.
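To make the core idea concrete, the following is a minimal numpy sketch of a variance-covariance regularizer of the kind the abstract describes: a variance term that keeps each feature dimension of the intermediate representations from collapsing, plus a covariance term that decorrelates dimensions. The function name `seq_vcr_loss`, the target standard deviation of 1.0, and the weighting `gamma` are illustrative assumptions; the paper's exact formulation and hyperparameters may differ.

```python
import numpy as np

def seq_vcr_loss(h, gamma=1.0, eps=1e-4):
    """Variance-covariance regularizer (illustrative sketch, not the
    paper's exact loss).

    h: (T, d) array of intermediate representations, one row per
       sequence position.
    """
    T, d = h.shape
    h_centered = h - h.mean(axis=0, keepdims=True)

    # Variance term: penalize dimensions whose std falls below a
    # target of 1.0 (hinge), discouraging collapsed representations.
    std = np.sqrt(h_centered.var(axis=0) + eps)
    var_loss = np.mean(np.maximum(0.0, 1.0 - std))

    # Covariance term: penalize off-diagonal entries of the sample
    # covariance matrix, pushing dimensions to carry distinct signals.
    cov = (h_centered.T @ h_centered) / (T - 1)
    off_diag = cov - np.diag(np.diag(cov))
    cov_loss = np.sum(off_diag ** 2) / d

    return var_loss + gamma * cov_loss
```

A fully collapsed batch (all positions mapping to the same vector) incurs a loss near the hinge maximum, while well-spread, decorrelated representations incur a loss near zero, which is the behavior the regularizer is meant to induce.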