Latent space models for network data characterize each node through a vector of latent features whose pairwise similarities define the edge probabilities among pairs of nodes. Although this formulation has led to successful implementations, the overarching focus has been on directly inferring node embeddings through the latent features, rather than on learning the generative process underlying these embeddings. This focus prevents borrowing information across the node features and limits the ability to infer higher-level architectures governing network formation. For example, routinely studied networks often exhibit multiscale structures that reflect nested modular hierarchies among nodes, which could be learned via tree-based representations of dependencies among the latent features. We pursue this direction by bridging latent variable representations of network data with concepts from phylogenetic inference to design a novel latent space model that explicitly characterizes the generative process of the node feature vectors through a branching Brownian motion, with branching structure parametrized by a tree. This tree constitutes the main object of interest and is learned under a Bayesian perspective, leveraging priors inherited from the phylogenetic literature to infer tree-based modular hierarchies across nodes that explain heterogeneous multiscale patterns in the network. Identifiability results are derived along with posterior consistency theory. The inferential potential of our model is illustrated in simulations and in two real-data applications from criminology and neuroscience, where our formulation learns core structures hidden to state-of-the-art alternatives.
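The generative process described above can be sketched in a minimal simulation. Everything here is an illustrative assumption rather than the paper's exact specification: the tree topology and branch lengths are fixed by hand, the diffusion variance is set to the branch length, and a logistic link on inner products stands in for whichever similarity-based link the model actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary tree over n = 4 leaf nodes (labeled 0..3),
# given as (parent, child, branch_length) edges from a root "r".
edges = [
    ("r", "a", 1.0), ("r", "b", 1.0),
    ("a", 0, 0.5), ("a", 1, 0.5),
    ("b", 2, 0.5), ("b", 3, 0.5),
]
d = 2  # latent feature dimension (an assumption)

# Branching Brownian motion along the tree: each child's feature
# vector equals its parent's plus a Gaussian increment whose
# variance is proportional to the branch length.
features = {"r": np.zeros(d)}
for parent, child, length in edges:
    features[child] = features[parent] + rng.normal(0.0, np.sqrt(length), size=d)

n = 4
X = np.vstack([features[i] for i in range(n)])  # leaf feature vectors

# Edge probabilities from pairwise similarities of the latent
# features, here via a logistic link on inner products.
logits = X @ X.T
P = 1.0 / (1.0 + np.exp(-logits))

# Sample a symmetric adjacency matrix with no self-loops.
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T
print(A)
```

Because siblings share all Brownian increments above their common ancestor, leaves in the same subtree have positively correlated features, which is precisely how the tree induces modular block structure in the sampled network.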