Stochastic constraints, which involve both deterministic parameters and random variables, extend classical deterministic constraints by explicitly accounting for uncertainty. Such constraints are increasingly prevalent in data science, artificial intelligence, and bioinformatics; solving them, however, entails quantitative satisfaction problems that remain a significant challenge in computer science. In this paper, we propose a novel framework for determining deterministic parameters that maximize the satisfaction probability. Our approach features a synergy between stochastic optimization and symbolic techniques: at the high level, it employs \emph{oracle-based stochastic gradient descent} to identify high-quality parameter candidates, while at the low level, it uses \emph{interval arithmetic} to compute rigorously certified lower bounds on the satisfaction probability. The framework produces a sequence of sound and increasingly tight lower bounds on the true maximum satisfaction probability, supported by a high-probability convergence guarantee. We demonstrate the effectiveness and efficiency of our approach on Stochastic Satisfiability Modulo Theories (SSMT) problems and a stochastic trajectory planning task.
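The low-level certification step can be illustrated with a toy sketch. This is not the paper's algorithm: the constraint g(x, w) = x·w − w², the uniform distribution of w on [0, 1], and the dyadic bisection strategy are all hypothetical choices made for illustration. The sketch shows only the general principle: interval arithmetic over-approximates the range of g on each box, so any box certified nonnegative contributes soundly to a lower bound on the satisfaction probability.

```python
# Illustrative sketch: a certified lower bound on P_w[ g(x, w) >= 0 ]
# for a fixed parameter x, with w ~ Uniform[0, 1]. The constraint g and
# the bisection scheme are hypothetical, not taken from the paper.

class Interval:
    """Minimal interval arithmetic: [lo, hi] endpoints, outward semantics."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

def g(x, w):
    # Hypothetical constraint g(x, w) = x*w - w^2, required to be >= 0.
    return Interval(x, x) * w - w * w

def certified_lower_bound(x, depth=12):
    """Bisect the domain of w; count only boxes where g is provably >= 0.
    Undecided boxes at maximum depth are dropped, so the result is a
    sound (possibly conservative) lower bound on the probability."""
    prob, stack = 0.0, [(0.0, 1.0, depth)]
    while stack:
        lo, hi, d = stack.pop()
        box = g(x, Interval(lo, hi))
        if box.lo >= 0:                 # certified: g >= 0 on the whole box
            prob += hi - lo             # w uniform, so measure = box width
        elif box.hi >= 0 and d > 0:     # undecided: bisect further
            mid = (lo + hi) / 2
            stack += [(lo, mid, d - 1), (mid, hi, d - 1)]
        # box.hi < 0: certified violation, contributes nothing
    return prob
```

For this toy constraint the true satisfaction probability at x = 0.5 is 0.5 (the constraint holds exactly for w ≤ 0.5), and the returned bound approaches it from below as the bisection depth grows, mirroring the "sound and increasingly tight" behavior described above.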