Motion deblurring addresses image blur caused by camera or scene movement. Event cameras provide motion information encoded in asynchronous event streams. To efficiently leverage the temporal information of event streams, we employ Spiking Neural Networks (SNNs) for motion feature extraction and Artificial Neural Networks (ANNs) for color information processing. Due to the non-uniform distribution and inherent redundancy of event data, existing cross-modal feature fusion methods exhibit certain limitations. Inspired by the visual attention mechanism of the human visual system, this study introduces a bioinspired dual-drive hybrid network (BDHNet). Specifically, the Neuron Configurator Module (NCM) dynamically adjusts neuron configurations based on cross-modal features, thereby focusing spikes on blurry regions and adapting to varying blur scenarios. Additionally, the Region of Blurry Attention Module (RBAM) generates a blurry mask in an unsupervised manner, effectively extracting motion cues from the event features and guiding more accurate cross-modal feature fusion. Extensive subjective and objective evaluations demonstrate that our method outperforms current state-of-the-art methods on both synthetic and real-world datasets.
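To illustrate the idea of dynamically configuring spiking neurons so that spikes concentrate in blurry regions, the sketch below shows a leaky integrate-and-fire (LIF) neuron whose per-pixel firing threshold is modulated by a cross-modal feature map. This is a minimal, hypothetical stand-in for the NCM described above, not the paper's implementation: the function name, the threshold-modulation rule, and all parameter values are assumptions for illustration only.

```python
import numpy as np

def ncm_modulated_lif(events, modulation, v_th_base=1.0, decay=0.5):
    """Hypothetical sketch of an NCM-style LIF layer.

    events:     (T, H, W) binary event frames over T time steps
    modulation: (H, W) cross-modal feature map in [0, 1]; higher values
                (assumed to mark sharp regions) raise the threshold so
                fewer spikes fire there, concentrating spikes in blurry
                regions where modulation is low
    Returns a (T, H, W) array of output spike trains.
    """
    T, H, W = events.shape
    v = np.zeros((H, W))                       # membrane potential
    v_th = v_th_base * (1.0 + modulation)      # dynamic per-pixel threshold
    spikes = np.zeros((T, H, W))
    for t in range(T):
        v = decay * v + events[t]              # leaky integration of events
        fired = v >= v_th                      # spike where threshold crossed
        spikes[t] = fired
        v = np.where(fired, 0.0, v)            # hard reset after a spike
    return spikes

# Two pixels receiving identical event input: the one with low modulation
# (treated as blurry) spikes repeatedly, the high-modulation one stays silent.
events = np.ones((4, 1, 2))
modulation = np.array([[0.0, 1.0]])
out = ncm_modulated_lif(events, modulation)
```

Under this toy rule, identical event input yields dense spiking where the modulation map is low and suppressed spiking where it is high, mimicking the attention-like reallocation of spikes toward blurry regions.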