We introduce Graph-Induced Sum-Product Networks (GSPNs), a new probabilistic framework for graph representation learning that can tractably answer probabilistic queries. Inspired by the computational trees that vertices induce in message-passing neural networks, we build hierarchies of sum-product networks (SPNs) where the parameters of a parent SPN are learnable transformations of the a-posteriori mixing probabilities of its children's sum units. Due to weight sharing and the tree-shaped computation graphs of GSPNs, we obtain the efficiency and efficacy of deep graph networks with the additional advantages of a probabilistic model. We show the model's competitiveness in scarce-supervision scenarios, under missing data, and on graph classification in comparison to popular neural models. We complement the experiments with qualitative analyses of hyper-parameters and of the model's ability to answer probabilistic queries.
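To make the parent-child coupling concrete, the following is a minimal, self-contained sketch of the idea stated above: each vertex carries a small SPN-style mixture over its feature likelihoods, and a parent's mixing weights are produced by a learnable transformation of the a-posteriori mixing probabilities of its children's sum units. This is not the authors' implementation; all class and function names (`VertexSPN`, `ParentFromChildren`, the Gaussian leaves, the mean aggregation) are illustrative assumptions, and PyTorch is assumed only for convenience.

```python
# Hypothetical sketch of GSPN-style parameter sharing between parent and child
# sum units; not the reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VertexSPN(nn.Module):
    """One sum unit over C Gaussian leaves for a single vertex (illustrative)."""

    def __init__(self, num_components: int, feat_dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_components, feat_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_components, feat_dim))

    def leaf_log_likelihoods(self, x: torch.Tensor) -> torch.Tensor:
        # x: (feat_dim,) -> log p(x | component) for each of the C leaves.
        dist = torch.distributions.Normal(self.mu, self.log_sigma.exp())
        return dist.log_prob(x).sum(dim=-1)

    def forward(self, x: torch.Tensor, log_mix: torch.Tensor):
        # log_mix: (C,) log mixing weights supplied from outside the unit.
        comp_ll = self.leaf_log_likelihoods(x)
        log_px = torch.logsumexp(log_mix + comp_ll, dim=0)   # marginal log-likelihood
        posterior = F.softmax(log_mix + comp_ll, dim=0)      # a-posteriori mixing probs
        return log_px, posterior


class ParentFromChildren(nn.Module):
    """Learnable map from aggregated child posteriors to parent mixing weights."""

    def __init__(self, num_components: int):
        super().__init__()
        self.transform = nn.Linear(num_components, num_components)

    def forward(self, child_posteriors: torch.Tensor) -> torch.Tensor:
        # child_posteriors: (num_children, C); aggregate, then renormalize in log space.
        pooled = child_posteriors.mean(dim=0)
        return F.log_softmax(self.transform(pooled), dim=0)


if __name__ == "__main__":
    C, D = 4, 8
    child_spn, parent_spn = VertexSPN(C, D), VertexSPN(C, D)
    to_parent = ParentFromChildren(C)

    x_children = torch.randn(3, D)            # features of three child vertices
    x_parent = torch.randn(D)                 # feature of the parent vertex
    uniform = torch.full((C,), 1.0 / C).log() # uniform mixing at the leaves of the tree

    posteriors = torch.stack([child_spn(x, uniform)[1] for x in x_children])
    parent_log_mix = to_parent(posteriors)
    log_px, _ = parent_spn(x_parent, parent_log_mix)
    print("parent marginal log-likelihood:", log_px.item())
```

In this reading, the only trainable coupling across the tree is the transformation applied to the children's posteriors, which mirrors the weight sharing and tree-shaped computation the abstract attributes to GSPNs; mean pooling over children is one plausible aggregation choice, not necessarily the one used in the paper.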