Human action recognition has become an important research focus in computer vision due to its wide range of applications. 3D ResNet-based CNN models, particularly MC3, R3D, and R(2+1)D, use different convolutional filter designs to extract spatiotemporal features. This paper investigates the impact of reducing the temporal information these models capture while increasing the spatial resolution of the input frames. To set up this experiment, we first created counterparts of the three original architectures with a dropout layer added before the final classifier. We then developed ten new variants of each of these three designs. The variants incorporate attention blocks into their architectures, including the convolutional block attention module (CBAM) and temporal convolutional networks (TCN), as well as multi-head and channel attention mechanisms. The goal is to measure how much each of these blocks influences the performance of the temporally restricted models. Testing all models on UCF101 yielded an accuracy of 88.98% for the variant that adds multi-head attention to the modified R(2+1)D. The paper concludes that the missing temporal features significantly affect the performance of the newly created increased-resolution models. Despite contributing similar improvements to overall performance, the variants behaved differently at the class level.
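To illustrate the kind of modification the variants apply, the sketch below shows a multi-head self-attention block operating on per-frame feature vectors from a 3D backbone, followed by temporal pooling before a classifier. This is a minimal NumPy sketch under assumed shapes (8 temporal positions, 64 channels, 4 heads); all projection weights are random placeholders, not the paper's trained parameters, and it is not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Apply multi-head self-attention over a (T, D) sequence of
    per-frame features from the 3D backbone. Weights are random
    placeholders for illustration; a trained model would learn them."""
    T, D = x.shape
    assert D % num_heads == 0
    d = D // num_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(4))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split the channel dimension into heads: (num_heads, T, d).
    split = lambda m: m.reshape(T, num_heads, d).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention per head: (num_heads, T, T).
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d), axis=-1)
    # Merge heads back to (T, D) and project out.
    out = (attn @ v).transpose(1, 0, 2).reshape(T, D)
    return out @ Wo

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 64))   # 8 temporal positions, 64 channels (assumed)
attended = multi_head_attention(feats, num_heads=4, rng=rng)
pooled = attended.mean(axis=0)         # temporal pooling before the final classifier
print(attended.shape, pooled.shape)    # (8, 64) (64,)
```

The pooled vector would then feed the dropout layer and final classifier described above; where exactly the block is inserted in each of the three backbones is a design choice of each variant.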