The ability of intelligent systems to predict human behaviors is crucial, particularly in fields such as autonomous vehicle navigation and social robotics. However, the complexity of human motion has prevented the development of a standardized dataset for human motion prediction, thereby hindering the establishment of pre-trained models. In this paper, we address these limitations by integrating multiple datasets, encompassing both trajectory and 3D pose keypoints, to propose a pre-trained model for human motion prediction. We merge seven distinct datasets across varying modalities and standardize their formats. To facilitate multimodal pre-training, we introduce Multi-Transmotion, an innovative transformer-based model designed for cross-modality pre-training. Additionally, we present a novel masking strategy to capture rich representations. Our methodology demonstrates competitive performance across various datasets on several downstream tasks, including trajectory prediction on the NBA and JTA datasets, as well as pose prediction on the AMASS and 3DPW datasets. The code is publicly available: https://github.com/vita-epfl/multi-transmotion