Perception systems in fields such as robotics, manufacturing, and data analysis generate large volumes of temporal and spatial data to capture their environments effectively. However, sifting through this data for specific scenarios is a meticulous and error-prone process that is often application-dependent and lacks generality and reproducibility. In this work, we introduce Spatial Regular Expressions (SpREs) as a novel querying language for pattern matching over perception streams containing spatial and temporal data derived from multi-modal dynamic environments. To highlight the capabilities of SpREs, we developed the STREM tool as both an offline and online pattern-matching framework for perception data. We demonstrate the offline capabilities of STREM through a case study on a publicly available AV dataset (Woven Planet Perception) and its online capabilities through a case study integrating STREM in ROS with the CARLA simulator. We also conduct performance benchmark experiments on various SpRE queries. Using our matching framework, we are able to find over 20,000 matches within 296 ms, making STREM applicable in runtime monitoring applications.