We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge. Ego-Exo4D centers around simultaneously-captured egocentric and exocentric video of skilled human activities (e.g., sports, music, dance, bike repair). 740 participants from 13 cities worldwide performed these activities in 123 different natural scene contexts, yielding long-form captures from 1 to 42 minutes each and 1,286 hours of video combined. The multimodal nature of the dataset is unprecedented: the video is accompanied by multichannel audio, eye gaze, 3D point clouds, camera poses, IMU, and multiple paired language descriptions -- including a novel "expert commentary" done by coaches and teachers and tailored to the skilled-activity domain. To push the frontier of first-person video understanding of skilled human activity, we also present a suite of benchmark tasks and their annotations, including fine-grained activity understanding, proficiency estimation, cross-view translation, and 3D hand/body pose. All resources are open sourced to fuel new research in the community. Project page: http://ego-exo4d-data.org/