In Bayesian optimization, a black-box function is maximized with the aid of a surrogate model. We apply distributed Thompson sampling, with a Gaussian process as the surrogate model, to the multi-agent Bayesian optimization problem. In our distributed Thompson sampling implementation, each agent receives sampled points from its neighbors, where the communication network among agents is encoded as a graph; each agent maintains its own Gaussian process model of the objective function. We prove a theoretical bound on the Bayesian simple regret that depends on the size of the largest complete subgraph of the communication graph. Unlike bounds for batch Bayesian optimization, our bound applies when communication among agents is constrained. Compared to sequential Thompson sampling, our bound guarantees faster convergence in time whenever the communication graph contains a complete subgraph of at least two agents. We confirm the efficacy of our algorithm with numerical simulations on standard optimization test functions, illustrating the impact of graph connectivity on regret convergence.
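The algorithm described above can be illustrated with a minimal sketch: each agent fits a Gaussian process to the evaluations it has seen, draws one posterior sample over a candidate grid, evaluates the objective at that sample's maximizer, and then shares the new (point, value) pair with its graph neighbors. This is an assumed, simplified implementation for intuition only (discrete candidate grid, fixed RBF kernel, no hyperparameter tuning), not the authors' code; the function names and parameters are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and covariance at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - v.T @ v

def distributed_thompson_sampling(f, neighbors, grid, rounds, rng):
    """Maximize f over `grid` with one GP per agent; `neighbors[i]` lists the
    agents that receive agent i's new evaluation each round (the graph)."""
    n_agents = len(neighbors)
    data = [([], []) for _ in range(n_agents)]  # per-agent (points, values)
    best = -np.inf
    for _ in range(rounds):
        new_evals = []
        for i in range(n_agents):
            X, y = data[i]
            if X:
                mu, cov = gp_posterior(np.array(X), np.array(y), grid)
                # Thompson sampling: one draw from the GP posterior
                sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(grid)))
            else:
                sample = rng.standard_normal(len(grid))  # uninformed first draw
            x = grid[int(np.argmax(sample))]
            fx = f(x)
            best = max(best, fx)
            new_evals.append((x, fx))
        # each agent's evaluation is kept locally and sent to its neighbors
        for i in range(n_agents):
            for j in {i, *neighbors[i]}:
                data[j][0].append(new_evals[i][0])
                data[j][1].append(new_evals[i][1])
    return best

# Usage: two fully connected agents maximizing a toy 1-D objective.
grid = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
best = distributed_thompson_sampling(
    f=lambda x: -(float(x[0]) - 0.3) ** 2,  # maximum near x = 0.3
    neighbors=[[1], [0]],                   # complete graph on two agents
    grid=grid, rounds=5, rng=np.random.default_rng(0),
)
```

Because both agents share every evaluation, each one conditions its GP on twice as many points per round as a single sequential sampler, which is the mechanism behind the improved regret bound for larger complete subgraphs.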