Transformer models have demonstrated remarkable success in many domains, such as natural language processing (NLP) and computer vision. With the growing interest in transformer-based architectures, they are now being applied to gesture recognition, and we likewise explore and devise a novel ConvMixFormer architecture for dynamic hand gestures. Self-attention in transformers scales quadratically with the length of the input sequence, which makes these models computationally complex and heavy. Addressing this drawback, we design a resource-efficient model that replaces the self-attention in the transformer with a simple convolutional-layer-based token mixer. The computational cost and parameter count of the convolution-based mixer are considerably lower than those of quadratic self-attention. The convolution mixer also helps the model capture local spatial features that self-attention struggles to capture due to its sequential processing nature. Further, an efficient gate mechanism is employed in place of the conventional feed-forward network of the transformer, helping the model control the flow of features across the different stages of the proposed model. This design uses fewer learnable parameters, nearly half those of the vanilla transformer, which enables fast and efficient training. The proposed method is evaluated on the NVidia Dynamic Hand Gesture and Briareo datasets, where our model achieves state-of-the-art results on single and multimodal inputs. We also demonstrate the parameter efficiency of the proposed ConvMixFormer model compared to other methods. The source code is available at https://github.com/mallikagarg/ConvMixFormer.
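The cost argument above, quadratic self-attention versus a linear convolutional token mixer, can be illustrated with a back-of-envelope count of multiply-adds. The function names, kernel size, and token/dimension values below are illustrative assumptions, not figures from the paper:

```python
# Rough, hypothetical cost sketch (not the paper's exact layers): self-attention
# mixes N tokens of dimension d at O(N^2 * d) cost, while a depthwise-convolution
# token mixer with kernel size k costs only O(N * k * d).

def attention_mixing_flops(n_tokens: int, dim: int) -> int:
    """Approximate multiply-adds for the Q.K^T scores plus the weighted sum over V."""
    return 2 * n_tokens * n_tokens * dim

def conv_mixer_flops(n_tokens: int, dim: int, kernel: int = 3) -> int:
    """Approximate multiply-adds for one depthwise conv pass over the token sequence."""
    return n_tokens * kernel * dim

n, d = 196, 256  # assumed token count and embedding size, chosen only for illustration
print(attention_mixing_flops(n, d))  # grows quadratically with n
print(conv_mixer_flops(n, d))        # grows linearly with n
```

Because the depthwise convolution touches each token only through a fixed-size kernel, its cost stays linear in the sequence length, which is the intuition behind the parameter and compute savings claimed above.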