The poor performance of transformers on arithmetic tasks seems to stem in large part from their inability to keep track of the exact position of each digit within a large span of digits. We address this problem by adding an embedding to each digit that encodes its position relative to the start of the number. Beyond the boost these embeddings provide on their own, we show that this fix enables architectural modifications such as input injection and recurrent layers to improve performance even further. With positions resolved, we can study the logical extrapolation ability of transformers: can they solve arithmetic problems that are larger and more complex than those in their training data? We find that by training on only 20-digit numbers with a single GPU for one day, we can reach state-of-the-art performance, achieving up to 99% accuracy on 100-digit addition problems. Finally, we show that these gains in numeracy also unlock improvements on other multi-step reasoning tasks, including sorting and multiplication.
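The core idea can be illustrated with a minimal sketch. The helper below is hypothetical (the paper's exact embedding scheme may differ, e.g. in how offsets are sampled during training): it assigns each digit token a position relative to the start of its number, resetting at non-digit tokens, and then looks up a learned vector for that relative position to add to the usual token embedding. The table size `max_pos` and dimension `dim` are assumed hyperparameters.

```python
import numpy as np

def digit_positions(tokens):
    """Return, for each token, its 1-based offset from the start of the
    current run of digits, or 0 for non-digit tokens (operators, etc.).
    Hypothetical helper illustrating digit-relative positions."""
    positions = []
    run = 0
    for t in tokens:
        if t.isdigit():
            run += 1
            positions.append(run)
        else:
            run = 0
            positions.append(0)
    return positions

# A small embedding table: one vector per relative digit position.
# max_pos and dim are assumed hyperparameters, not values from the paper.
rng = np.random.default_rng(0)
max_pos, dim = 32, 8
pos_embedding = rng.standard_normal((max_pos + 1, dim))

tokens = list("123+45")
pos = digit_positions(tokens)   # digit offsets reset after the '+' token
extra = pos_embedding[pos]      # (len(tokens), dim); added to token embeddings
```

In this sketch, every units digit, tens digit, and so on receives a consistent positional signal regardless of where the number appears in the sequence, which is what lets the model align corresponding digits of the two operands.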