We present DeGAS, a differentiable Gaussian approximate semantics for loopless probabilistic programs that enables sample-free, gradient-based optimization in models with both continuous and discrete components. DeGAS evaluates programs under a Gaussian-mixture semantics and replaces measure-zero predicates and discrete branches with a vanishing smoothing scheme, yielding closed-form expressions for posterior and path probabilities. We prove that these quantities are differentiable with respect to program parameters, enabling end-to-end optimization via standard automatic differentiation, without Monte Carlo estimators. On thirteen benchmark programs, DeGAS achieves accuracy and runtime competitive with variational inference and MCMC. Crucially, it reliably solves optimization problems where sampling-based baselines fail to converge because conditioning involves continuous variables.
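To illustrate the flavor of the approach, the sketch below works through a toy one-branch program by hand: for `x ~ Normal(mu, sigma^2)`, the branch `if x > 0` has closed-form path probability Φ(μ/σ), and E[max(x, 0)] = μΦ(μ/σ) + σφ(μ/σ), whose exact gradient in μ is Φ(μ/σ). This permits sample-free gradient descent on the program parameter. All names here (`expected_relu`, `optimize_mu`) are illustrative; this is not the DeGAS implementation, only a hand-derived instance of the closed-form, Monte-Carlo-free optimization it automates.

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_relu(mu, sigma):
    """Closed-form E[max(X, 0)] for X ~ N(mu, sigma^2).

    The branch `if x > 0` taken with path probability Phi(mu/sigma);
    combining the per-branch conditional moments gives a closed form,
    so no samples are drawn."""
    z = mu / sigma
    return mu * Phi(z) + sigma * phi(z)

def optimize_mu(target, sigma=1.0, lr=0.2, steps=500):
    """Sample-free gradient descent on (E[max(X, 0)] - target)^2.

    Uses the exact closed-form gradient d/dmu E[max(X, 0)] = Phi(mu/sigma);
    the full system would obtain such gradients by automatic
    differentiation rather than by hand."""
    mu = 0.0
    for _ in range(steps):
        err = expected_relu(mu, sigma) - target
        mu -= lr * 2.0 * err * Phi(mu / sigma)
    return mu
```

For example, `optimize_mu(1.0)` finds the μ (≈ 0.9 for σ = 1) at which the program's expected output equals 1.0, with every quantity in the loop evaluated in closed form.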