For training a video-based action recognition model that accepts multi-view video, annotating frame-level labels is tedious and difficult, whereas annotating sequence-level labels is relatively easy. Such coarse annotations are called weak labels. However, training a multi-view video-based action recognition model with weak labels for frame-level perception is challenging. In this paper, we propose a novel learning framework in which the weak labels are first used to train a multi-view video-based base model, which is subsequently used for downstream frame-level perception tasks. The base model is trained to obtain individual latent embeddings for each view in the multi-view input. To train the model with weak labels, we propose a novel latent loss function. We also propose a model that uses the view-specific latent embeddings for downstream frame-level action recognition and detection tasks. The proposed framework is evaluated on the MM Office dataset against several baseline algorithms. The results show that the proposed base model is trained effectively with weak labels and that the latent embeddings help the downstream models improve accuracy.
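To make the notion of sequence-level (weak) supervision concrete, the following is a minimal, generic multiple-instance-style sketch, not the paper's actual latent loss: per-frame scores are max-pooled into a single sequence-level score, so only a sequence-level label is needed during training, while the model still produces frame-level scores for downstream perception. The function name `sequence_loss` and the toy logits are illustrative assumptions.

```python
import numpy as np

def sequence_loss(frame_logits, weak_label):
    """Binary cross-entropy between the max-pooled frame score and the
    sequence-level (weak) label.

    frame_logits: (T,) array of per-frame action scores (illustrative)
    weak_label:   0 or 1, whether the action occurs anywhere in the sequence
    """
    seq_logit = frame_logits.max()           # pool frames -> one sequence score
    p = 1.0 / (1.0 + np.exp(-seq_logit))     # sigmoid probability
    return -(weak_label * np.log(p) + (1 - weak_label) * np.log(1 - p))

# Toy per-frame scores: one frame strongly indicates the action.
logits = np.array([-2.0, 0.5, 3.0, -1.0])
print(sequence_loss(logits, 1))  # low loss: sequence labeled "action present"
print(sequence_loss(logits, 0))  # high loss: sequence labeled "action absent"
```

Only the pooled score is compared against the label, so no frame-level annotation is required; gradient signal still flows back to the individual frame scores through the pooling step.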