Artificial neural networks have been shown to be state-of-the-art machine learning models in a wide variety of applications, including natural language processing and image recognition. However, building a performant neural network is a laborious task and requires substantial computing power. Neural Architecture Search (NAS) addresses this issue by automatically selecting the optimal network from a set of potential candidates. While many NAS methods still require training of (some) neural networks, zero-cost proxies promise to identify the optimal network without any training. In this work, we propose the zero-cost proxy Network Expressivity by Activation Rank (NEAR). It is based on the effective rank of the pre- and post-activation matrices, i.e., the values of a neural network layer before and after applying its activation function. We demonstrate a state-of-the-art correlation between this network score and model accuracy on NAS-Bench-101 and NATS-Bench-SSS/TSS. In addition, we present a simple approach to estimate the optimal layer sizes in multi-layer perceptrons. Furthermore, we show that this score can be used to select hyperparameters such as the activation function and the neural network weight initialization scheme.
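To make the core quantity concrete, the sketch below computes the effective rank (in the sense of Roy and Vetterli, 2007) of a layer's pre- and post-activation matrices: the exponential of the Shannon entropy of the normalized singular values. The helper name `effective_rank` and the toy ReLU layer are illustrative assumptions for this sketch, not the paper's implementation, and how NEAR aggregates these per-layer values into a single network score is described in the main text.

```python
import numpy as np

def effective_rank(M: np.ndarray, eps: float = 1e-12) -> float:
    """Effective rank: exp of the Shannon entropy of the
    normalized singular value distribution of M."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / (s.sum() + eps)        # normalize singular values to a distribution
    p = p[p > eps]                 # drop (near-)zero entries before taking logs
    return float(np.exp(-(p * np.log(p)).sum()))

# Toy untrained layer: pre-activation Z = X @ W, post-activation A = ReLU(Z).
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 64))   # mini-batch of 128 inputs
W = rng.standard_normal((64, 32))    # layer weights
Z = X @ W                            # pre-activation matrix
A = np.maximum(Z, 0.0)               # post-activation matrix (ReLU)

print(effective_rank(Z), effective_rank(A))
```

Intuitively, a higher effective rank of the activation matrix of an untrained layer indicates that more independent directions of the input batch are preserved, which is the notion of expressivity the score builds on.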