Most existing mobile robotic datasets capture static scenes, limiting their utility for evaluating robot performance in dynamic environments. To address this, we present THUD++ (TsingHua University Dynamic), a large-scale indoor robotic dataset oriented toward mobile robots for dynamic scene understanding. The current dataset includes 13 large-scale dynamic scenarios, combining real-world and synthetic data collected with a real robot platform and a physics-based simulation platform, respectively. The RGB-D dataset comprises over 90K image frames, 20M 2D/3D bounding boxes of static and dynamic objects, camera poses, and IMU measurements. The trajectory dataset covers over 6,000 pedestrian trajectories in indoor scenes. Additionally, the dataset is augmented with a Unity3D-based simulation platform that allows researchers to create custom scenes and test algorithms in a controlled environment. We evaluate state-of-the-art methods on THUD++ across mainstream indoor scene understanding tasks, including 3D object detection, semantic segmentation, relocalization, pedestrian trajectory prediction, and navigation. Our experiments highlight the challenges mobile robots face in indoor environments, especially when navigating complex, crowded, and dynamic scenes. By sharing this dataset, we aim to accelerate the development and testing of mobile robot algorithms and contribute to real-world robotic applications.