In this article, we propose a novel standalone hybrid Spiking-Convolutional Neural Network (SC-NN) model and evaluate it on image inpainting tasks. Our approach combines the unique capabilities of SNNs, such as event-based computation and temporal processing, with the strong representation-learning abilities of CNNs to generate high-quality inpainted images. The model is trained on a custom dataset designed specifically for image inpainting, in which missing regions are created using masks. The hybrid model consists of SNNConv2d layers and traditional CNN layers: the SNNConv2d layers implement the leaky integrate-and-fire (LIF) neuron model to capture spiking behavior, while the CNN layers capture spatial features. Training is driven by a mean squared error (MSE) loss; the model reaches a training loss of 0.015, indicating accurate performance on the training set, and a validation loss as low as 0.0017 on the test set. Furthermore, extensive experimental results demonstrate state-of-the-art performance, showcasing the potential of integrating temporal dynamics and feature extraction in a single network for image inpainting.
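To make the SNNConv2d idea concrete, the sketch below shows one plausible reading of such a layer: a standard 2D convolution whose output current drives leaky integrate-and-fire (LIF) membrane dynamics, emitting binary spikes when the membrane potential crosses a threshold. This is a minimal NumPy illustration, not the paper's implementation; the class name `LIFConv2d` and the parameter values (`beta`, `v_th`) are assumptions for demonstration only.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Naive single-channel 'valid' 2D convolution (illustrative, not optimized)."""
    H, W = x.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

class LIFConv2d:
    """Hypothetical spiking conv layer: conv output feeds LIF neuron dynamics.

    Per time step: V <- beta * V + I, spike where V >= v_th, then hard reset.
    """
    def __init__(self, kernel, beta=0.9, v_th=1.0):
        self.kernel = kernel   # convolution weights (assumed fixed here)
        self.beta = beta       # membrane leak factor (0 < beta < 1)
        self.v_th = v_th       # firing threshold
        self.v = None          # membrane potential state, created lazily

    def __call__(self, x):
        i_in = conv2d_valid(x, self.kernel)          # input current from conv
        if self.v is None:
            self.v = np.zeros_like(i_in)
        self.v = self.beta * self.v + i_in           # leaky integration
        spikes = (self.v >= self.v_th).astype(np.float32)
        self.v = self.v * (1.0 - spikes)             # reset neurons that fired
        return spikes
```

Calling the layer repeatedly on successive frames lets sub-threshold inputs accumulate across time steps, which is the temporal-processing behavior the abstract attributes to the SNN component.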