Simultaneous Localization and Mapping (SLAM) technology has been widely applied in various robotic scenarios, from rescue operations to autonomous driving. However, the generalization of SLAM algorithms remains a significant challenge, as current datasets often lack scalability in terms of platforms and environments. To address this limitation, we present FusionPortableV2, a multi-sensor SLAM dataset featuring sensor diversity, varied motion patterns, and a wide range of environmental scenarios. Our dataset comprises $27$ sequences, spanning over $2.5$ hours and collected from four distinct platforms: a handheld suite, a legged robot, an unmanned ground vehicle (UGV), and a vehicle. These sequences cover diverse settings, including buildings, campuses, and urban areas, with a total length of $38.7$\,km. Additionally, the dataset includes ground-truth (GT) trajectories and RGB point cloud maps covering approximately $0.3$\,km$^2$. To validate the utility of our dataset in advancing SLAM research, we assess several state-of-the-art (SOTA) SLAM algorithms. Furthermore, we demonstrate the dataset's broad applicability beyond traditional SLAM tasks by investigating its potential for monocular depth estimation. The complete dataset, including sensor data, GT, and calibration details, is available at https://fusionportable.github.io/dataset/fusionportable_v2.