Elements of neural networks, both biological and artificial, can be described by their selectivity for specific cognitive features. Understanding these features is important for understanding the inner workings of neural networks. For a living system, such as a neuron, whose response to a stimulus is unknown and not differentiable, the only way to reveal these features is through a feedback loop that exposes it to a large set of different stimuli. The properties of these stimuli should be varied iteratively to maximize the neuronal response. To use this feedback loop with a biological neural network, it must run quickly and efficiently, reaching the stimulus that maximizes a given neuron's activation in as few iterations as possible. Here we present an efficiently designed framework for such a loop. We successfully tested it on an artificial spiking neural network (SNN), a model that simulates the asynchronous spiking activity of neurons in living brains. Our optimization method for activation maximization (AM) is based on the low-rank Tensor Train (TT) decomposition of the discrete activation function. The optimization space is the latent parameter space of images generated by SN-GAN or VQ-VAE generative models. To our knowledge, this is the first time that effective AM has been applied to SNNs. We track changes in the optimal stimuli of artificial neurons during training and show that highly selective neurons can form as early as the first epochs of training, and in the early layers of a convolutional spiking network. This formation of refined optimal stimuli is associated with an increase in classification accuracy. Some neurons, especially in the deeper layers, may gradually change the concepts they are selective for during learning, potentially explaining their importance for model performance.
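The feedback loop described above can be sketched as a black-box discrete search: a latent code is mapped to an image by a generator, the (non-differentiable) neuron returns an activation, and the optimizer iteratively proposes new codes. The sketch below is a minimal toy version; for tractability it replaces the paper's low-rank Tensor-Train optimizer with plain coordinate-wise search over a latent grid, and substitutes a synthetic `snn_response` for the real generator-plus-SNN pipeline (all names here are illustrative assumptions, not the paper's API).

```python
import numpy as np

rng = np.random.default_rng(0)

D, N = 8, 16                      # latent dimensions, grid points per dimension
grid = np.linspace(-2.0, 2.0, N)  # discretized latent axis
target = rng.choice(N, size=D)    # hidden optimum of the toy black box

def snn_response(z_idx):
    """Toy stand-in for the non-differentiable neuron: a 'spike rate'
    that peaks at the hidden target latent code."""
    return float(np.exp(-np.sum((grid[z_idx] - grid[target]) ** 2)))

def activation_maximization(budget=2000):
    """Coordinate-wise discrete search over the latent grid -- a simplified
    stand-in for the TT-based optimizer used in the paper."""
    z = rng.choice(N, size=D)       # random initial latent code
    best = snn_response(z)
    evals, improved = 1, True
    while improved and evals + D * N <= budget:
        improved = False
        for d in range(D):          # sweep each latent coordinate
            for k in range(N):      # query the black box along that axis
                cand = z.copy()
                cand[d] = k
                score = snn_response(cand)
                evals += 1
                if score > best:
                    best, z, improved = score, cand, True
    return z, best, evals

z_opt, activation, used = activation_maximization()
```

Because the toy objective is separable across coordinates, one sweep suffices here; the point of the TT-based method in the paper is to keep the number of black-box queries small even when the response depends jointly on many latent dimensions.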