Landmark universal function approximation results for neural networks with trained weights and biases provided impetus for the ubiquitous use of neural networks as learning models in Artificial Intelligence (AI) and neuroscience. Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can similarly be learned by tuning smaller subsets of parameters, for example the output weights, within randomly initialized networks. Motivated by the fact that biases can be interpreted as biologically plausible mechanisms for adjusting unit outputs in neural networks, such as tonic inputs or activation thresholds, we investigate the expressivity of neural networks with random weights where only biases are optimized. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only. We further show that an equivalent result holds for recurrent neural networks predicting dynamical system trajectories. Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on multi-task methods such as bias fine-tuning and unit masking.
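To make the bias-only training setup described above concrete, here is a minimal PyTorch sketch, not the authors' code: all weight matrices of a feedforward network are frozen at their random initialization and only the bias vectors are handed to the optimizer. The architecture, toy regression task, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Feedforward network with random weights; only biases will be trained.
net = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

# Freeze every weight matrix; leave bias vectors trainable.
for name, param in net.named_parameters():
    param.requires_grad = name.endswith("bias")

optimizer = torch.optim.Adam(
    [p for p in net.parameters() if p.requires_grad], lr=1e-3
)
loss_fn = nn.MSELoss()

# Hypothetical toy task: regress y = sin(x1) * cos(x2) on random inputs.
x = torch.rand(1024, 2) * 6.28
y = (torch.sin(x[:, 0]) * torch.cos(x[:, 1])).unsqueeze(1)

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    optimizer.step()
```

Under this setup the weights act as a fixed random feature backbone, and per-task bias vectors could be swapped in and out to realize the multi-task behaviour the abstract describes.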