Non-convex constrained optimization problems are ubiquitous in robotic applications such as multi-agent navigation, UAV trajectory optimization, and soft robot simulation. For this problem class, conventional optimizers suffer from small step sizes and slow convergence. We propose BC-ADMM, a variant of the Alternating Direction Method of Multipliers (ADMM), that can solve a class of non-convex constrained optimizations via biconvex constraint relaxation. Our algorithm allows larger step sizes by breaking the problem into small-scale sub-problems that can be easily solved in parallel. We show that our method provides both a theoretical convergence-rate guarantee and a practical convergence guarantee in the asymptotic sense. Through numerical experiments on four robotic applications, we show that BC-ADMM converges faster than conventional gradient descent and Newton's method in terms of wall-clock time.
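To make the splitting structure that ADMM variants exploit concrete, here is a minimal sketch of a generic (scaled-form) ADMM iteration on a simple convex instance, min_x 0.5||x - a||^2 + lam*||z||_1 subject to x = z. This is not the authors' BC-ADMM; the problem, step size `rho`, and iteration count are illustrative assumptions. It shows how the consensus constraint decouples the objective into small sub-problems, each with a cheap closed-form update.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (element-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_consensus_lasso(a, lam=0.5, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5||x - a||^2 + lam*||z||_1  s.t.  x = z.

    Each update is a small, independent sub-problem with a closed-form
    solution; this per-block structure is what makes ADMM-style methods
    amenable to parallelization.
    """
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled dual variable
    for _ in range(iters):
        # x-update: minimize 0.5||x - a||^2 + (rho/2)||x - z + u||^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: minimize lam*||z||_1 + (rho/2)||x - z + u||^2
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the consensus constraint x = z
        u = u + x - z
    return x, z
```

At the fixed point, x and z agree and equal the soft-thresholded input `soft_threshold(a, lam)`, the known closed-form solution of this toy problem, which makes the iteration easy to sanity-check.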