In 1991, Brenier proved a theorem that generalizes the polar decomposition of square matrices (factored as the product of a positive semidefinite matrix and a unitary matrix) to any vector field $F:\mathbb{R}^d\rightarrow \mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F=\nabla u \circ M$. We propose a practical implementation of this far-reaching theoretical result and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network. The map $M$ can either be evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M=\nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, not injective, we consider the additional task of estimating the ill-posed inverse map $M^{-1}$, approximating its pre-image measure with a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as to sampling from densities that are not log-concave.
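As a concrete illustration of the two ingredients named above (a sketch, not the authors' implementation), the snippet below parameterizes $u$ as a small input convex neural network (ICNN) in JAX and evaluates $M=\nabla u^* \circ F$ pointwise by solving the conjugate problem $\nabla u^*(y)=\arg\max_x \langle x, y\rangle - u(x)$ with BFGS. The architecture sizes, the initialization scheme, and the toy field $F$ are all illustrative assumptions.

```python
# A minimal sketch, assuming an ICNN parameterization of u and pointwise
# evaluation of M = \nabla u^* \circ F via the conjugate problem
#   \nabla u^*(y) = argmax_x <x, y> - u(x).
import jax
import jax.numpy as jnp
from jax.scipy.optimize import minimize


def init_icnn(key, dim, hidden=32, depth=3):
    """Random ICNN parameters: per layer, a weight on the input x
    (unconstrained), a weight on the previous hidden state z (mapped through
    softplus at call time so it is nonnegative), and a bias."""
    sizes = [hidden] * depth + [1]
    params, z_in = [], 1  # dummy z-width for the first layer
    for h in sizes:
        key, kx, kz = jax.random.split(key, 3)
        Wx = jax.random.normal(kx, (h, dim)) / jnp.sqrt(dim)
        Wz = jax.random.normal(kz, (h, z_in)) * 0.1 - 2.0  # keep softplus(Wz) small
        params.append((Wx, Wz, jnp.zeros(h)))
        z_in = h
    return params


def u(params, x):
    """u(x), convex in x: the z-weights are nonnegative (softplus of raw
    weights) and the activation (softplus) is convex and nondecreasing."""
    z = jnp.zeros(1)
    for i, (Wx, Wz, b) in enumerate(params):
        pre = Wx @ x + b
        if i > 0:  # the first layer has no meaningful z input
            pre = pre + jax.nn.softplus(Wz) @ z
        z = jax.nn.softplus(pre) if i + 1 < len(params) else pre
    return z[0]


def grad_u_star(params, y):
    """\nabla u^*(y) = argmax_x <x, y> - u(x): a smooth concave problem,
    solved here by minimizing its negation with BFGS, warm-started at y."""
    obj = lambda x: u(params, x) - jnp.dot(x, y)
    return minimize(obj, y, method="BFGS").x


def M(params, F, x):
    """Pointwise evaluation of the measure-preserving factor
    M = \nabla u^* \circ F."""
    return grad_u_star(params, F(x))


# Toy usage with an arbitrary, hypothetical field F.
key = jax.random.PRNGKey(0)
params = init_icnn(key, dim=2)
F = lambda x: jnp.sin(x) + 0.5 * x
print(M(params, F, jnp.array([0.3, -1.2])))
```

In practice the conjugate solve can be replaced by an auxiliary network trained to approximate $\nabla u^* \circ F$ directly, which amortizes the per-point optimization, matching the alternative mentioned in the abstract.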