When autonomous systems are deployed in real-world scenarios, sensors are often subject to limited field-of-view (FOV) constraints, whether inherently through system design or through unexpected occlusions and sensor failures. When a large FOV is unavailable, the system must infer information about the environment and predict the state of nearby surroundings from the available data in order to maintain safe and accurate operation. In this work, we explore the effectiveness of deep learning for dynamic map state prediction based on limited-FOV time series data. We show that by representing dynamic sensor data in a simple single-image format that captures both spatial and temporal information, we can effectively use a wide variety of existing image-to-image learning models to predict map states with high accuracy across a diverse set of sensing scenarios.
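To make the single-image encoding idea concrete, the sketch below shows one plausible way to collapse a time series of 2D occupancy grids into a single grayscale image, with recency encoded as intensity. This is a minimal illustrative assumption, not the encoding used in the paper: the function name, the recency-weighting scheme, and the occupancy-grid input format are all hypothetical.

```python
import numpy as np

def encode_spatiotemporal(frames):
    """Collapse a time series of 2D occupancy grids into one image.

    Hypothetical encoding: each frame is an (H, W) binary occupancy
    grid, and newer frames are drawn with higher intensity, so a single
    grayscale image carries both where obstacles were observed and how
    recently they were seen.
    """
    frames = np.asarray(frames, dtype=np.float32)  # shape (T, H, W)
    T = frames.shape[0]
    # Weight frame t by (t + 1) / T so recency maps to brightness.
    weights = (np.arange(T, dtype=np.float32) + 1.0) / T
    # Max over time keeps the most recent (brightest) observation per cell.
    image = np.max(frames * weights[:, None, None], axis=0)
    return image  # (H, W) values in [0, 1], usable by image-to-image models

# Toy example: an obstacle moving right across a 4x4 grid over 3 steps.
frames = np.zeros((3, 4, 4), dtype=np.float32)
for t in range(3):
    frames[t, 1, t] = 1.0
img = encode_spatiotemporal(frames)
print(img[1])  # row 1 brightens left to right, tracing the motion
```

An encoding like this keeps the input a standard single-channel image, which is what lets off-the-shelf image-to-image architectures be applied without modification; stacking frames as separate channels would be an equally plausible alternative.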