Transformer models have demonstrated remarkable success in many domains such as natural language processing (NLP) and computer vision. With the growing interest in transformer-based architectures, they are now utilized for gesture recognition as well. Motivated by this, we devise a novel ConvMixFormer architecture for dynamic hand gesture recognition. Self-attention in transformers scales quadratically with sequence length, which makes these models computationally complex and heavy. We address this drawback by designing a resource-efficient model that replaces self-attention with a simple convolutional token mixer. The computational cost and parameter count of the convolution-based mixer are considerably lower than those of quadratic self-attention. The convolutional mixer also helps the model capture local spatial features, which self-attention struggles with owing to its token-wise sequential processing. Further, an efficient gate mechanism is employed in place of the conventional feed-forward network, helping the model control the flow of features across the stages of the proposed architecture. The resulting design uses nearly half the learnable parameters of the vanilla transformer, which enables fast and efficient training. The proposed method is evaluated on the NVidia Dynamic Hand Gesture and Briareo datasets, where our model achieves state-of-the-art results on both single-modal and multimodal inputs. We also demonstrate the parameter efficiency of the proposed ConvMixFormer model compared to other methods. The source code is available at https://github.com/mallikagarg/ConvMixFormer.
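The two substitutions described above, a convolutional token mixer in place of self-attention and a gate in place of the feed-forward network, can be sketched in a few lines. The following is a minimal NumPy illustration under our own assumptions; the function names, shapes, and the exact gate form are hypothetical and are not taken from the released code. It mainly shows why the mixer's cost is linear in sequence length, versus the quadratic pairwise dot products of self-attention.

```python
import numpy as np

def depthwise_conv_token_mixer(x, kernels):
    """Mix tokens along the sequence axis with a per-channel 1D convolution.

    x: (seq_len, dim) token features; kernels: (dim, k) depthwise filters.
    Cost is O(seq_len * k * dim) -- linear in seq_len, unlike the
    O(seq_len^2 * dim) pairwise dot products of self-attention.
    """
    seq_len, dim = x.shape
    k = kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))  # 'same' padding along the sequence (odd k)
    out = np.empty_like(x)
    for t in range(seq_len):
        window = xp[t:t + k]              # (k, dim): k-token neighbourhood
        out[t] = np.sum(window * kernels.T, axis=0)  # each channel mixes independently
    return out

def gated_ffn(x, w_gate, w_val):
    """Hypothetical gated channel mixer: a sigmoid gate modulates a value
    path, controlling feature flow with one pair of projections instead of
    the usual expand-then-project feed-forward network."""
    gate = 1.0 / (1.0 + np.exp(-x @ w_gate))  # (seq_len, dim), values in (0, 1)
    return gate * (x @ w_val)

# Illustrative per-layer parameter comparison for the token-mixing step
dim, k = 256, 3
conv_params = dim * k        # one k-tap filter per channel -> 768
attn_params = 4 * dim * dim  # Wq, Wk, Wv, Wo projections   -> 262144
```

With `dim = 256`, the depthwise mixer needs orders of magnitude fewer token-mixing parameters than the four attention projections, which is the kind of saving the abstract refers to; the precise totals for ConvMixFormer depend on the full architecture.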