Designing expressive generative models that support exact and efficient inference is a core question in probabilistic ML. Probabilistic circuits (PCs) offer a framework in which this tractability-vs-expressiveness trade-off can be analyzed theoretically. Recently, squared PCs, which encode subtractive mixtures via negative parameters, have emerged as tractable models that can be exponentially more expressive than monotonic PCs, i.e., PCs with positive parameters only. In this paper, we provide a more precise theoretical characterization of the expressiveness relationships among these models. First, we prove that squared PCs can be less expressive than monotonic ones. Second, we formalize a novel class of PCs -- sum of squares PCs -- that can be exponentially more expressive than both squared and monotonic PCs. Around sum of squares PCs, we build an expressiveness hierarchy that allows us to precisely unify and separate several tractable model classes, such as Born machines and PSD models, as well as other recently introduced tractable probabilistic models that use complex parameters. Finally, we empirically show the effectiveness of sum of squares circuits for distribution estimation.
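To illustrate the subtractive-mixture idea behind squared PCs, the following is a minimal numerical sketch (not the paper's construction): a mixture with one negative weight is not itself a valid density, but its square is nonnegative everywhere, and for simple leaves (e.g., Gaussians) its normalizer remains computable. The weights and component parameters below are arbitrary choices for illustration.

```python
import numpy as np

def gauss(x, mu, sigma):
    # Gaussian pdf evaluated pointwise.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-8.0, 8.0, 4001)

# Subtractive mixture: one component enters with a *negative* weight.
mix = 1.0 * gauss(x, 0.0, 1.0) - 0.6 * gauss(x, 1.0, 1.5)
sq = mix ** 2  # squaring guarantees nonnegativity

assert mix.min() < 0       # the raw subtractive mixture dips below zero (in the tails)
assert sq.min() >= 0       # its square is a valid unnormalized density

# Normalizer: closed-form for Gaussian leaves; approximated numerically here.
dx = x[1] - x[0]
Z = sq.sum() * dx
density = sq / Z
```

The squared mixture can place near-zero density in regions a purely additive (monotonic) mixture with the same components could not, which is the intuition behind the expressiveness gap the abstract refers to.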