Event cameras provide sparse yet temporally high-resolution motion information, demonstrating great potential for motion deblurring. However, the delicate events are highly susceptible to noise. Although noise can be reduced by raising the threshold of Dynamic Vision Sensors (DVS), this inevitably causes under-reporting of events. Most existing event-guided deblurring methods overlook this practical trade-off; their indiscriminate feature extraction and naive fusion produce unstable, mixed representations and ultimately unsatisfactory performance. To tackle these challenges, we propose a Robust Event-guided Deblurring (RED) network with modality-specific disentangled representation. First, we introduce a Robustness-Oriented Perturbation Strategy (RPS) that mimics various DVS thresholds, exposing RED to diverse under-reporting patterns and thereby fostering robustness under unknown conditions. Adapted to RPS, a Modality-specific Representation Mechanism (MRM) is designed to explicitly model semantic understanding, motion priors, and cross-modality correlations from two inherently distinct yet complementary sources: blurry images and partially disrupted events. Building on these reliable features, two interactive modules are presented to enhance motion-sensitive areas in blurry images and inject semantic context into under-reporting event representations. Extensive experiments on synthetic and real-world datasets demonstrate that RED consistently achieves state-of-the-art performance in terms of both accuracy and robustness.
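The abstract does not specify how RPS is implemented; a common way to mimic a raised DVS threshold in training pipelines is to randomly drop a fraction of events, since low-contrast events are the ones suppressed by a higher threshold. Below is a minimal, hypothetical sketch of such a perturbation (the function name `rps_perturb`, the `(t, x, y, polarity)` event layout, and the drop-rate range are illustrative assumptions, not the authors' method):

```python
import numpy as np

def rps_perturb(events: np.ndarray, rng: np.random.Generator,
                drop_range: tuple = (0.0, 0.6)) -> np.ndarray:
    """Hypothetical sketch of a Robustness-Oriented Perturbation Strategy.

    `events` is an (N, 4) array of (t, x, y, polarity). Raising the DVS
    contrast threshold under-reports events, which we approximate here by
    dropping a randomly sampled fraction of events per training sample, so
    the network sees diverse under-reporting patterns during training.
    """
    drop_rate = rng.uniform(*drop_range)           # mimic an unknown threshold
    keep = rng.random(len(events)) >= drop_rate    # Bernoulli keep mask
    return events[keep]
```

Sampling a fresh `drop_rate` per sample (rather than fixing one) is what exposes the network to a range of under-reporting severities, matching the abstract's goal of robustness under unknown DVS threshold conditions.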