Transformers have achieved state-of-the-art performance in solving the inverse problem of video Snapshot Compressive Imaging (SCI), whose ill-posedness is rooted in the mixed degradation of spatial masking and temporal aliasing. However, previous Transformers lack insight into this degradation and thus have limited performance and efficiency. In this work, we tailor an efficient reconstruction architecture that avoids temporal aggregation in early layers and uses the Hierarchical Separable Video Transformer (HiSViT) as its building block. HiSViT comprises multiple densely connected groups of Cross-Scale Separable Multi-head Self-Attention (CSS-MSA) and Gated Self-Modulated Feed-Forward Network (GSM-FFN), each of which operates on a separate channel portion at a different scale, enabling multi-scale interactions and long-range modeling. By separating spatial operations from temporal ones, CSS-MSA introduces an inductive bias of paying more attention within frames than between frames while reducing computational overhead. GSM-FFN further enhances locality via a gating mechanism and factorized spatial-temporal convolutions. Extensive experiments demonstrate that our method outperforms previous methods by $\!>\!0.5$ dB with comparable or fewer parameters and complexity. The source code and pretrained models are released at https://github.com/pwangcs/HiSViT.
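To illustrate the core idea behind separable attention, the sketch below factorizes joint spatiotemporal self-attention into a within-frame (spatial) pass followed by a per-location (temporal) pass. This is a minimal NumPy illustration of the general spatial-temporal factorization only, not the authors' CSS-MSA, which additionally splits channels into groups processed at different scales and uses learned query/key/value projections; all function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x):
    # x: (T, N, C) video tokens; attend among the N tokens of each frame.
    scale = x.shape[-1] ** -0.5
    attn = softmax(x @ x.transpose(0, 2, 1) * scale)   # (T, N, N)
    return attn @ x                                    # (T, N, C)

def temporal_attention(x):
    # Attend among the T frames at each of the N spatial locations.
    xt = x.transpose(1, 0, 2)                          # (N, T, C)
    scale = x.shape[-1] ** -0.5
    attn = softmax(xt @ xt.transpose(0, 2, 1) * scale) # (N, T, T)
    return (attn @ xt).transpose(1, 0, 2)              # back to (T, N, C)

T, N, C = 8, 64, 32
x = np.random.randn(T, N, C)
y = temporal_attention(spatial_attention(x))
```

Joint attention over all T*N tokens costs O((TN)^2 C), whereas the separable form costs O(T N^2 C + N T^2 C); spending most of that budget on the spatial pass matches the inductive bias of attending more within frames than between them.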