Skeleton-based Temporal Action Segmentation (STAS) aims to segment and recognize various actions from long, untrimmed sequences of human skeletal movements. Current STAS methods typically employ spatio-temporal modeling to establish dependencies among joints as well as frames, and utilize one-hot encoding with cross-entropy loss for frame-wise classification supervision. However, these methods overlook the intrinsic correlations among joints and actions within skeletal features, leading to a limited understanding of human movements. To address this, we propose a Text-Derived Relational Graph-Enhanced Network (TRG-Net) that leverages prior graphs generated by Large Language Models (LLMs) to enhance both modeling and supervision. For modeling, the Dynamic Spatio-Temporal Fusion Modeling (DSFM) method incorporates Text-Derived Joint Graphs (TJG) with channel- and frame-level dynamic adaptation to effectively model spatial relations, while integrating spatio-temporal core features during temporal modeling. For supervision, the Absolute-Relative Inter-Class Supervision (ARIS) method employs contrastive learning between action features and text embeddings to regularize the absolute class distributions, and utilizes Text-Derived Action Graphs (TAG) to capture the relative inter-class relationships among action features. Additionally, we propose a Spatial-Aware Enhancement Processing (SAEP) method, which incorporates random joint occlusion and axial rotation to enhance spatial generalization. Performance evaluations on four public datasets demonstrate that TRG-Net achieves state-of-the-art results.