Graph Neural Networks (GNNs) are becoming increasingly popular for graph-based learning tasks such as point cloud processing due to their state-of-the-art (SOTA) performance. Nevertheless, the research community has focused primarily on improving model expressiveness, paying little attention to how to design efficient GNN models for edge scenarios with real-time requirements and limited resources. Examining existing GNN models reveals that their execution characteristics vary widely across platforms and that Out-Of-Memory (OOM) failures are frequent, highlighting the need for hardware-aware GNN design. To address this challenge, this work proposes a novel hardware-aware graph neural architecture search framework tailored for resource-constrained edge devices, namely HGNAS. To achieve hardware awareness, HGNAS integrates an efficient GNN hardware performance predictor that evaluates the latency and peak memory usage of GNNs in milliseconds. Meanwhile, we study GNN memory usage during inference and propose a peak memory estimation method that, combined with the predictor's outputs, enhances the robustness of architecture evaluations. Furthermore, HGNAS constructs a fine-grained design space by decoupling the GNN paradigm, enabling the exploration of extreme-performance architectures. In addition, a multi-stage hierarchical search strategy is leveraged to navigate the huge candidate space, reducing a single search to a few GPU hours. To the best of our knowledge, HGNAS is the first automated GNN design framework for edge devices, and also the first work to achieve hardware awareness of GNNs across different platforms. Extensive experiments across various applications and edge devices demonstrate the superiority of HGNAS. It achieves up to a 10.6x speedup and an 82.5% peak memory reduction with negligible accuracy loss compared to DGCNN on ModelNet40.
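The abstract does not detail the peak memory estimation method. As a purely illustrative sketch (not HGNAS's actual technique), one common way to bound peak activation memory of a message-passing GNN is to walk the layer sequence and track which tensors are live simultaneously; the function names and the assumption that input features, edge-wise messages, and output features coexist during aggregation are ours, not the paper's.

```python
# Hypothetical sketch of analytical peak-memory estimation for GNN
# inference. Assumes a simple message-passing layer where the input
# node features, gathered per-edge messages, and output node features
# are all live at once during aggregation.

def feat_bytes(num_rows, dim, dtype_bytes=4):
    # Size in bytes of a dense float feature matrix (rows x dim).
    return num_rows * dim * dtype_bytes

def estimate_peak_memory(num_nodes, num_edges, layer_dims, dtype_bytes=4):
    """Rough upper bound on peak activation memory (bytes) across layers.

    layer_dims: [input_dim, hidden_1, ..., output_dim]
    """
    peak = 0
    in_dim = layer_dims[0]
    for out_dim in layer_dims[1:]:
        live = (feat_bytes(num_nodes, in_dim, dtype_bytes)    # input features
                + feat_bytes(num_edges, in_dim, dtype_bytes)  # per-edge messages
                + feat_bytes(num_nodes, out_dim, dtype_bytes))  # output features
        peak = max(peak, live)
        in_dim = out_dim
    return peak
```

For a graph with 1000 nodes, 5000 edges, and layer dimensions [3, 64, 64] in float32, the estimate is dominated by the second layer, whose edge-wise messages are 64-dimensional.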