We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs), which we call $\Delta$-amortized inference ($\Delta$-AI). Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, sparsity of the PGM enables local credit assignment in the agent's policy learning objective. This yields a local constraint that can be turned into a local loss in the style of generative flow networks (GFlowNets), which enables off-policy training but avoids the need to instantiate all the random variables for each parameter update, thus speeding up training considerably. The $\Delta$-AI objective matches the conditional distribution of a variable given its Markov blanket in a tractable learned sampler, which has the structure of a Bayesian network, to the corresponding conditional distribution under the target PGM. As such, the trained sampler recovers marginals and conditional distributions of interest and enables inference over partial subsets of variables. We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and training latent variable models with sparse factor structure.
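The local matching condition described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact objective: the notation $\mathrm{pa}(i)$, $\mathrm{ch}(i)$ for parents and children of $x_i$ in the Bayesian-network sampler $q_\theta$, $\phi$ for the factors of the target PGM, and the loss symbol $\mathcal{L}_i$ are assumptions introduced here.

```latex
% Matching the sampler's conditional given the Markov blanket to the target's:
% since q_\theta is a Bayesian network, only the conditionals of x_i and its
% children involve x_i, and only factors whose scope contains i matter, so
\[
q_\theta\bigl(x_i \mid x_{\mathrm{pa}(i)}\bigr)
\prod_{j \in \mathrm{ch}(i)} q_\theta\bigl(x_j \mid x_{\mathrm{pa}(j)}\bigr)
\;\propto\;
\prod_{\phi : \, i \in \mathrm{scope}(\phi)} \phi(x),
\]
% with proportionality over values of x_i, the Markov blanket held fixed.
% A GFlowNet-style squared log-ratio loss compares a sample x with a
% perturbation x' differing from x only in coordinate i:
\[
\mathcal{L}_i(x, x') =
\left(
\log \frac{q_\theta\bigl(x_i \mid x_{\mathrm{pa}(i)}\bigr)
           \prod_{j \in \mathrm{ch}(i)} q_\theta\bigl(x_j \mid x_{\mathrm{pa}(j)}\bigr)}
          {q_\theta\bigl(x'_i \mid x_{\mathrm{pa}(i)}\bigr)
           \prod_{j \in \mathrm{ch}(i)} q_\theta\bigl(x_j \mid x'_{\mathrm{pa}(j)}\bigr)}
\;-\;
\log \frac{\prod_{\phi \ni i} \phi(x)}{\prod_{\phi \ni i} \phi(x')}
\right)^{2}.
\]
```

Both sides of the log-ratio involve only the variables in the Markov blanket of $x_i$, which is the sense in which sparsity of the PGM localizes credit assignment: a parameter update for this loss never needs to instantiate variables outside that neighborhood.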