Eldan's stochastic localization is a probabilistic construction that has proved instrumental to modern breakthroughs in high-dimensional geometry and the design of sampling algorithms. Motivated by sampling under non-Euclidean geometries and the mirror descent algorithm in optimization, we develop a functional generalization of Eldan's process that replaces Gaussian regularization with regularization by any positive integer multiple of a log-Laplace transform. We further give a mixing time bound on the Markov chain induced by our localization process, which holds if our target distribution satisfies a functional Poincaré inequality. Finally, we apply our framework to differentially private convex optimization in $\ell_p$ norms for $p \in [1, 2)$, where we improve state-of-the-art query complexities in a zeroth-order model.