Accurate prediction of human activity and trajectories is crucial for safe and reliable human-robot interaction in dynamic environments, such as industrial settings with mobile robots. Datasets with fine-grained action labels for people moving in industrial environments alongside mobile robots are scarce, as most existing datasets focus on social navigation in public spaces. This paper introduces the TH\"OR-MAGNI Act dataset, a substantial extension of the TH\"OR-MAGNI dataset, which captures participant movements alongside robots in diverse semantic and spatial contexts. TH\"OR-MAGNI Act provides 8.3 hours of manually labeled participant actions derived from egocentric videos recorded via eye-tracking glasses. These actions, aligned with the motion cues provided in TH\"OR-MAGNI, follow a long-tailed distribution with diverse acceleration, velocity, and navigation distance profiles. We demonstrate the utility of TH\"OR-MAGNI Act on two tasks: action-conditioned trajectory prediction and joint action and trajectory prediction. To address these tasks, we propose two efficient transformer-based models that outperform the baselines. These results underscore the potential of TH\"OR-MAGNI Act for developing predictive models that enhance human-robot interaction in complex environments.