Graph-based representations of samples in computational mechanics datasets can prove instrumental when dealing with problems such as irregular domains or the molecular structures of materials. To effectively analyze and process such datasets, deep learning offers Graph Neural Networks (GNNs), which utilize techniques like message-passing within their architecture. The issue, however, is that as individual graphs scale and/or the GNN architecture becomes increasingly complex, the growing energy budget of the overall deep learning model makes it unsustainable and restricts its use in settings like edge computing. To overcome this, we propose in this paper Hybrid Variable Spiking Graph Neural Networks (HVS-GNNs) that utilize Variable Spiking Neurons (VSNs) within their architecture to promote sparse communication and hence reduce the overall energy budget. VSNs, while promoting sparse event-driven computations, also perform well on regression tasks, which are often encountered in computational mechanics applications and are the main target of this paper. Three examples dealing with the prediction of mechanical properties of materials from their microscale/mesoscale structures are presented to test the performance of the proposed HVS-GNNs on regression tasks. We also compare the performance of HVS-GNN architectures against vanilla GNNs and GNNs utilizing leaky integrate-and-fire neurons. The results show that HVS-GNNs perform well on regression tasks, all while promoting sparse communication and, hence, energy efficiency.
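To make the sparse-communication idea concrete, the following is a minimal sketch of a generic variable-spiking-style activation: a neuron that leakily integrates its input and, only upon crossing a threshold, emits a graded (continuous-valued) output rather than a binary spike, resetting afterwards. The function name, the leak/threshold parameters, and the choice of graded output are illustrative assumptions, not the exact VSN formulation used in the paper.

```python
import numpy as np

def vsn_step(x, v, beta=0.9, threshold=1.0):
    """One time step of a hypothetical variable-spiking neuron.

    x : input current to each neuron (array)
    v : membrane potential carried over from the previous step (array)

    The neuron leakily integrates x into v; where v crosses the
    threshold it "fires", emitting a graded (continuous) value
    instead of a binary spike, and its potential is reset.
    Sub-threshold neurons emit exact zeros, so most messages
    passed downstream are zero -- the source of sparsity.
    """
    v = beta * v + x                                  # leaky integration
    fired = v >= threshold                            # event condition
    out = np.where(fired, np.maximum(x, 0.0), 0.0)    # graded output on spike, else 0
    v = np.where(fired, 0.0, v)                       # hard reset after firing
    return out, v

# Toy usage: only the neuron whose input exceeds the threshold fires.
out, v = vsn_step(np.array([0.4, 1.2, 0.1]), np.zeros(3))
# out is mostly zeros; weaker inputs keep accumulating in v instead.
```

In a GNN layer, such an activation would be applied to node features before message-passing, so that sub-threshold nodes contribute zero-valued (and thus cheaply skippable) messages to their neighbors.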