We develop parallel algorithms for simulating zeroth-order (i.e., gradient-free) Metropolis Markov chains based on the Picard map. For Random Walk Metropolis Markov chains targeting log-concave distributions $π$ on $\mathbb{R}^d$, our algorithm generates samples close to $π$ in $\mathcal{O}(\sqrt{d})$ parallel iterations with $\mathcal{O}(\sqrt{d})$ processors, thereby speeding up the convergence of the corresponding sequential implementation by a factor of $\sqrt{d}$. Furthermore, a modification of our algorithm generates samples from an approximate measure $π_r$ in $\mathcal{O}(1)$ parallel iterations with $\mathcal{O}(d)$ processors. We empirically assess the performance of the proposed algorithms on high-dimensional regression problems, an epidemic model where the gradient is unavailable, and a real-world application in precision medicine. Our algorithms are straightforward to implement and may constitute a useful tool for practitioners seeking to sample from a prescribed distribution $π$ using only point-wise evaluations of $\log π$ and parallel computing.
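For context, the sequential baseline being parallelized is the classical Random Walk Metropolis chain, which requires only point-wise evaluations of $\log π$. A minimal sketch follows; the function names, the Gaussian target, and the $2.38/\sqrt{d}$ step-size choice are illustrative assumptions, not the paper's Picard-map algorithm.

```python
import numpy as np

def rwm(log_pi, x0, n_steps, step_size, rng=None):
    """Sequential Random Walk Metropolis using only point-wise
    evaluations of log_pi (zeroth-order / gradient-free)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_pi(x)
    samples = []
    for _ in range(n_steps):
        # Isotropic Gaussian proposal centered at the current state.
        y = x + step_size * rng.standard_normal(x.shape)
        lq = log_pi(y)
        # Metropolis accept/reject: no gradient of log_pi is needed.
        if np.log(rng.uniform()) < lq - lp:
            x, lp = y, lq
        samples.append(x.copy())
    return np.array(samples)

# Illustrative log-concave target: standard Gaussian in d = 10 dimensions.
d = 10
log_pi = lambda x: -0.5 * np.sum(x ** 2)
# Classical optimal-scaling heuristic for RWM: step_size ~ 2.38 / sqrt(d).
chain = rwm(log_pi, np.zeros(d), n_steps=5000, step_size=2.38 / np.sqrt(d))
```

Each iteration here depends on the previous state, which is exactly the sequential bottleneck that a Picard-map parallelization targets.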