Graph property prediction has drawn increasing attention in recent years: graphs are among the most general data structures, accommodating an arbitrary number of nodes and connections between them, and they underpin classification and regression tasks on data such as networks, molecules, and knowledge bases. We introduce a novel generalized global pooling layer that mitigates the information loss typically incurred at the readout phase of Message-Passing Neural Networks. The layer is parametrized by two values, $\beta$ and $p$, which can optionally be learned, and under specific settings the transformation it performs reduces to several popular readout functions (mean, max, and sum). To showcase the expressiveness and performance of this technique, we take the current best-performing architecture on a popular graph property prediction task, use our readout layer as a drop-in replacement, and report new state-of-the-art results. The code to reproduce the experiments is available at: https://github.com/EricAlcaide/generalized-readout-phase
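To make the idea concrete, one family of pooling functions with these recovery properties is a power mean scaled by a node-count factor, $N^{\beta} \left(\frac{1}{N}\sum_i x_i^{p}\right)^{1/p}$. This is an illustrative sketch under that assumption, not necessarily the paper's exact formulation; the function name and signature are hypothetical:

```python
def generalized_readout(xs, beta=0.0, p=1.0):
    """Illustrative generalized pooling over scalar node features `xs`.

    Computes N**beta * ((1/N) * sum(x**p))**(1/p), a hypothetical sketch of a
    (beta, p)-parametrized readout. Special cases (for non-negative inputs):
      beta=0, p=1       -> mean
      beta=1, p=1       -> sum
      beta=0, p -> inf  -> max
    """
    n = len(xs)
    if p == float("inf"):
        core = max(xs)  # limit of the power mean as p -> infinity
    else:
        core = (sum(v ** p for v in xs) / n) ** (1.0 / p)
    return (n ** beta) * core
```

In a learned setting, $\beta$ and $p$ would be trainable scalars so the network can interpolate between (or move beyond) these classical aggregators.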