We present a one-size-fits-all programmatic approach to reasoning about a plethora of objectives on probabilistic programs. The first ingredient is to add a reward statement to the language. We then define a program transformation that applies a monotone function to the cumulative reward of the program. The key idea is that this transformation uses incremental differences in the reward. This simple, elegant approach makes it possible to express, e.g., higher moments, threshold probabilities of rewards, the expected excess over a budget, and moment-generating functions. All these objectives can then be analyzed using a single existing technique: probabilistic wp-reasoning. We automated verification using the Caesar deductive verifier and report on applying the transformation to several examples.
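To illustrate the core idea of tracking a monotone function of the cumulative reward via incremental differences, here is a minimal Monte Carlo sketch (not the paper's wp-based method; the program, function names, and toy reward distribution are all hypothetical). Each `reward delta` step contributes the increment `f(r + delta) - f(r)`, so the increments telescope to `f(total reward)`; choosing `f(x) = x*x`, for instance, lets the expected transformed reward estimate the second moment.

```python
import random

def transformed_run(f, steps):
    """Run a toy probabilistic program, maintaining both the cumulative
    reward r and f(r) incrementally: each reward step of size delta
    contributes the difference f(r + delta) - f(r)."""
    r = 0.0       # cumulative reward
    fr = f(0.0)   # f of cumulative reward, maintained via increments
    for _ in range(steps):
        delta = random.choice([0.0, 1.0])  # toy probabilistic reward
        fr += f(r + delta) - f(r)          # incremental difference
        r += delta
    return r, fr

# Sanity check: the increments telescope, so fr == f(r) at the end.
random.seed(0)
square = lambda x: x * x
r, fr = transformed_run(square, 10)
assert abs(fr - square(r)) < 1e-9
```

Averaging `fr` over many runs then estimates the expectation of `f` applied to the cumulative reward, which is exactly the kind of objective (here, a higher moment) the transformation targets.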