This paper presents the ARN-LSTM architecture, a novel multi-stream action recognition model designed to address the challenge of simultaneously capturing spatial motion and temporal dynamics in action sequences. Traditional methods often focus solely on spatial or temporal features, limiting their ability to fully comprehend complex human activities. Our proposed model integrates joint, motion, and temporal information through a multi-stream fusion architecture. Specifically, it comprises a joint stream for extracting skeleton features, a temporal stream for capturing dynamic temporal features, and an ARN-LSTM block that utilizes Time-Distributed Long Short-Term Memory (TD-LSTM) layers followed by an Attention Relation Network (ARN) to model temporal relations. The outputs from these streams are fused in a fully connected layer to produce the final action prediction. Evaluations on the NTU RGB+D 60 and NTU RGB+D 120 datasets demonstrate the superior performance of our model, particularly in group activity recognition.
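To make the multi-stream layout concrete, the following is a minimal sketch in Keras-style Python, not the authors' implementation: the joint stream, the motion-based temporal stream, and a TD-LSTM block with a simple attention pooling standing in for the ARN, fused in a fully connected classifier. All layer sizes, the 25-joint NTU skeleton input, and the attention formulation are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical dimensions: 64 frames, 25 joints x 3 coords (NTU-style skeletons), 60 classes.
T, J, C, NUM_CLASSES = 64, 25, 3, 60

skeleton_in = layers.Input(shape=(T, J * C), name="skeleton")

# Joint stream: per-frame spatial features from raw joint coordinates.
joint_feat = layers.TimeDistributed(layers.Dense(128, activation="relu"))(skeleton_in)
joint_feat = layers.GlobalAveragePooling1D()(joint_feat)

# Temporal stream: frame-to-frame motion (finite differences) processed by an LSTM.
motion = layers.Lambda(lambda x: x[:, 1:, :] - x[:, :-1, :])(skeleton_in)
temporal_feat = layers.LSTM(128)(motion)

# ARN-LSTM block (simplified): TimeDistributed dense + LSTM over time, then an
# attention-weighted sum over frames as a stand-in for the Attention Relation Network.
x = layers.TimeDistributed(layers.Dense(128, activation="relu"))(skeleton_in)
x = layers.LSTM(128, return_sequences=True)(x)
attn_scores = layers.Dense(1)(x)                      # unnormalized score per frame
attn_weights = layers.Softmax(axis=1)(attn_scores)    # normalize over the time axis
arn_feat = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, attn_weights])

# Late fusion of the three streams in a fully connected classification layer.
fused = layers.Concatenate()([joint_feat, temporal_feat, arn_feat])
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = Model(skeleton_in, out)
model.summary()
```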