Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and rely on an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive. In this paper, we propose a new training paradigm that formulates the parameter learning of ConvNets as a prediction task: given a ConvNet architecture, we observe that correlations exist between image datasets and their corresponding optimal network parameters, and we explore whether a hyper-mapping between them can be learned to capture these relations, so that the network parameters for an image dataset never seen during training can be predicted directly. To this end, we put forward a new hypernetwork-based model, called PudNet, which learns a mapping between datasets and their corresponding network parameters and then predicts parameters for unseen data with only a single forward propagation. Moreover, our model benefits from a series of weight-sharing adaptive hyper recurrent units that capture the dependencies of parameters across different network layers. Extensive experiments demonstrate that our proposed method achieves good efficacy on unseen image datasets in two settings: intra-dataset prediction and inter-dataset prediction. PudNet also scales well to large-scale datasets, e.g., ImageNet-1K. Training ResNet-18 on ImageNet-1K from scratch using GC takes 8,967 GPU seconds and obtains a top-5 accuracy of 44.65%. In contrast, our PudNet needs only 3.89 GPU seconds to predict the network parameters of ResNet-18 while achieving comparable performance (44.92%), more than 2,300 times faster than the traditional training paradigm.
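To make the core idea concrete, below is a minimal toy sketch of a dataset-to-parameters hypernetwork. All names, shapes, and the mean-pooling dataset summary are illustrative assumptions, not the paper's actual PudNet architecture; it only shows the pattern of one recurrent cell with shared weights emitting per-layer parameter vectors in a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch (assumed shapes/names, not the paper's design):
# 1) summarize a dataset into a fixed-size context vector,
# 2) run one recurrent cell with SHARED weights once per target layer,
# 3) map each hidden state to that layer's flat parameter vector.

def dataset_context(images):
    """Summarize a dataset of shape (N, D) into a context vector by mean pooling."""
    return images.mean(axis=0)

class SharedHyperRNN:
    """A single tanh recurrent cell reused across layers; one output head per layer."""
    def __init__(self, ctx_dim, hidden_dim, layer_param_sizes):
        self.Wx = rng.normal(0, 0.1, (hidden_dim, ctx_dim))
        self.Wh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))
        # Separate heads because layers have different parameter counts.
        self.heads = [rng.normal(0, 0.1, (n, hidden_dim))
                      for n in layer_param_sizes]

    def predict(self, ctx):
        h = np.zeros(self.Wh.shape[0])
        params = []
        for head in self.heads:           # Wx and Wh are shared across layers,
            h = np.tanh(self.Wx @ ctx + self.Wh @ h)  # so later layers depend on earlier ones
            params.append(head @ h)       # flat parameter vector for this layer
        return params

# Usage: predict parameters for an unseen "dataset" in one forward pass.
images = rng.normal(size=(100, 32))       # toy dataset: 100 samples, 32 features
hyper = SharedHyperRNN(ctx_dim=32, hidden_dim=16,
                       layer_param_sizes=[3 * 3 * 8, 8 * 10])
predicted = hyper.predict(dataset_context(images))
print([p.shape for p in predicted])       # [(72,), (80,)]
```

The shared recurrent weights mirror the abstract's point that parameters of different layers are not independent: the hidden state carries information from earlier layers' predictions into later ones.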