Large pretrained self-attention neural networks, or transformers, have recently been very successful across a wide range of tasks. The performance of a model on a given task depends on its ability to memorize and generalize from the training data. Large transformer models, which may have billions of parameters, in theory have a huge capacity to memorize content. However, current optimization algorithms fall short of this theoretical capacity, and the achievable capacity is also highly dependent on the content being memorized. In this paper, we focus on the memory capacity of these models when trained with common training algorithms on synthetic training data. Based on the results, we derive an empirical capacity model (ECM) for a generic transformer. The ECM can be used to design task-specific transformer models with an optimal number of parameters in cases where the target memorization capability of the task can be defined.