Road traffic forecasting is crucial in real-world intelligent transportation scenarios such as traffic dispatching and path planning, in both city management and personal travel. Spatio-temporal graph neural networks (STGNNs) are the mainstream solution for this task. However, the quadratic complexity of dynamic spatial modeling in state-of-the-art STGNNs has become a bottleneck on large-scale traffic data. From the spatial data management perspective, we present a novel Transformer framework, PatchSTG, that efficiently and dynamically models spatial dependencies for large-scale traffic forecasting with interpretability and fidelity. Specifically, we design a novel irregular spatial patching scheme that reduces the number of points involved in the Transformer's dynamic attention computation. Irregular spatial patching first uses a leaf K-dimensional tree (KDTree) to recursively partition irregularly distributed traffic points into leaf nodes of small capacity, and then merges leaf nodes belonging to the same subtree into equal-occupancy, non-overlapping patches through padding and backtracking. On the patched data, depth attention and breadth attention are applied interchangeably in the encoder to dynamically learn local and global spatial knowledge, respectively from points within a patch and from points sharing the same index across patches. Experimental results on four real-world large-scale traffic datasets show that PatchSTG improves training speed by up to $10\times$ and memory utilization by up to $4\times$ while achieving state-of-the-art performance.
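The leaf-KDTree partitioning step can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`kdtree_partition`, `pad_to_patches`), the `-1` padding index, and the capacity values are all hypothetical, and the subtree-merging/backtracking step is omitted for brevity. The sketch only shows the core idea of recursively splitting irregularly placed points along alternating axes until each leaf holds at most a fixed number of points, then padding leaves to equal occupancy.

```python
import numpy as np

def kdtree_partition(points, idx, capacity, depth=0):
    """Recursively split point indices along alternating axes until each
    leaf holds at most `capacity` points (leaf-KDTree partitioning)."""
    if len(idx) <= capacity:
        return [idx]
    axis = depth % points.shape[1]                  # alternate the split axis
    order = idx[np.argsort(points[idx, axis])]      # sort indices along that axis
    mid = len(order) // 2                           # median split
    return (kdtree_partition(points, order[:mid], capacity, depth + 1)
            + kdtree_partition(points, order[mid:], capacity, depth + 1))

def pad_to_patches(leaves, patch_size):
    """Pad each leaf with -1 (a hypothetical padding index) so all patches
    have equal occupancy and do not overlap."""
    patches = []
    for leaf in leaves:
        pad = np.full(patch_size - len(leaf), -1, dtype=leaf.dtype)
        patches.append(np.concatenate([leaf, pad]))
    return np.stack(patches)

rng = np.random.default_rng(0)
pts = rng.uniform(size=(50, 2))                     # 50 irregularly placed sensors
leaves = kdtree_partition(pts, np.arange(50), capacity=8)
patches = pad_to_patches(leaves, patch_size=8)
print(patches.shape)                                # (num_patches, patch_size)
```

Because every point lands in exactly one leaf, the resulting patches partition the sensor set; attention can then operate on the patch axis instead of all point pairs.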
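The interchange of depth and breadth attention on patched data can also be sketched. Assuming a tensor of shape `(num_patches, patch_size, dim)`, depth attention attends among points inside each patch (local), while breadth attention attends among points that share the same index across patches (global); a plain single-head self-attention stands in for the paper's attention modules, so the helper names here are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head scaled dot-product self-attention over the second-to-last
    axis of x, with queries = keys = values = x (an illustrative stand-in)."""
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)
    return softmax(scores) @ x

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8, 16))                     # (num_patches, patch_size, dim)

# Depth attention: points within the same patch attend to each other (local).
depth_out = self_attention(X)

# Breadth attention: points at the same patch index attend across patches
# (global); swapping the first two axes regroups them before attending.
breadth_out = np.swapaxes(self_attention(np.swapaxes(X, 0, 1)), 0, 1)
```

With `P` patches of `S` points each, both passes cost `O(P * S^2)` (respectively `O(S * P^2)`) rather than the `O((P * S)^2)` of full spatial attention, which is the source of the claimed speed and memory savings.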