The problem of multi-task regression over graph nodes has recently been approached through Graph-Instructed Neural Networks (GINNs), a promising architecture belonging to the class of message-passing graph neural networks. In this work, we discuss the limitations of the Graph-Instructed (GI) layer and formalize a novel Edge-Wise GI (EWGI) layer. We discuss the advantages of the EWGI layer and provide numerical evidence that EWGINNs perform better than GINNs on some graph-structured input data, such as data inferred from Barabási-Albert graphs, and improve training regularization on graphs with chaotic connectivity, such as those inferred from Erdős-Rényi graphs.