Foundation models (FMs) are large neural networks trained on broad datasets that excel at downstream tasks with minimal fine-tuning. Human activity recognition in video has advanced with FMs, driven by competition among different architectures. However, high accuracies on standard benchmarks can paint an artificially rosy picture, as they often overlook real-world factors such as changing camera perspectives. Popular benchmarks, mostly sourced from YouTube or movies, offer diverse views but only coarse actions, which are insufficient for use cases that require fine-grained, domain-specific actions. Domain-specific datasets (e.g., for industrial assembly) typically use data recorded from a limited number of static perspectives. This paper empirically evaluates how perspective changes affect different FMs in fine-grained human activity recognition. We compare multiple backbone architectures and design choices, including image- and video-based models, as well as various strategies for temporal information fusion, ranging from the commonly used score averaging to more novel attention-based temporal aggregation mechanisms. This is the first systematic study of different foundation models and specific design choices for human activity recognition from unknown views, conducted with the goal of providing guidance for backbone and temporal-fusion scheme selection. Code and models will be made publicly available to the community.
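The two temporal-fusion families contrasted above can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions: the single-query attention form and all variable names are a simplification for exposition, not the paper's exact design.

```python
import numpy as np

def score_averaging(frame_logits):
    """Late fusion: average per-frame class scores over time."""
    return frame_logits.mean(axis=0)  # (T, C) -> (C,)

def attention_fusion(frame_feats, w_query, classifier_w, classifier_b):
    """Attention-based temporal aggregation (simplified single-query form):
    softmax-weight frame features by learned relevance, then classify once."""
    scores = frame_feats @ w_query              # (T,) per-frame relevance
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    pooled = weights @ frame_feats              # (D,) weighted frame feature
    return pooled @ classifier_w + classifier_b # (C,) class logits

# toy example: T=4 frames, D=8 feature dims, C=3 action classes
rng = np.random.default_rng(0)
T, D, C = 4, 8, 3
frame_feats = rng.normal(size=(T, D))
frame_logits = rng.normal(size=(T, C))

avg_logits = score_averaging(frame_logits)
att_logits = attention_fusion(frame_feats, rng.normal(size=D),
                              rng.normal(size=(D, C)), np.zeros(C))
```

Score averaging treats every frame equally, whereas the attention variant lets the model down-weight uninformative frames, which is one motivation for comparing the two under perspective changes.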