This work identifies a simple pre-training mechanism that leads to representations exhibiting better continual and transfer learning. This mechanism -- the repeated resetting of weights in the last layer, which we nickname "zapping" -- was originally designed for a meta-continual-learning procedure, yet we show it is surprisingly applicable in many settings beyond both meta-learning and continual learning. In our experiments, we transfer a pre-trained image classifier to a new set of classes in a few-shot setting. We show that our zapping procedure yields higher transfer accuracy and/or faster adaptation in both standard fine-tuning and continual learning settings, while being simple to implement and computationally efficient. In many cases, a combination of zapping and sequential learning achieves performance on par with state-of-the-art meta-learning without requiring expensive higher-order gradients. An intuitive explanation for the effectiveness of zapping is that repeatedly resetting the last layer forces the network to learn features that adapt rapidly to newly initialized classifiers. Zapping can thus be viewed as a computationally cheaper alternative to meta-learning rapidly adaptable features with higher-order gradients. This adds to recent work on the usefulness of resetting neural network parameters during training, and invites further investigation of this mechanism.
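To make the mechanism concrete, below is a minimal sketch of what repeated last-layer resetting could look like in a standard PyTorch pre-training loop. The model, the `reset_interval` schedule, and the choice to re-initialize the entire final layer (rather than, for example, individual class weights) are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of the "zapping" idea: periodically re-initialize the final
# classification layer during pre-training. All names (SimpleCNN, reset_interval)
# are hypothetical and chosen only for illustration.
import torch
import torch.nn as nn


class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The "last layer" that gets repeatedly reset ("zapped").
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


def pretrain_with_zapping(model, loader, epochs=10, reset_interval=1, lr=1e-3):
    """Standard supervised pre-training, except the last layer is re-initialized
    every `reset_interval` epochs, so the features must remain useful to a
    freshly initialized classifier."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        if epoch > 0 and epoch % reset_interval == 0:
            model.classifier.reset_parameters()  # the zap: wipe last-layer weights
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

The intuition from the abstract maps directly onto this sketch: because the classifier is repeatedly thrown away, the feature extractor is implicitly trained to support rapid re-learning of new output weights, which is the same situation it faces at transfer time.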