In 1991, Brenier proved a theorem that generalizes the polar decomposition of square matrices (a factorization into the product of a positive semidefinite matrix and a unitary matrix) to vector fields $F:\mathbb{R}^d\rightarrow \mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any such field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F=\nabla u \circ M$. We propose a practical implementation of this far-reaching theoretical result and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network (ICNN). The map $M$ can then either be evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M=\nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, not injective, we also consider the ill-posed task of approximating the pre-image measure $M^{-1}$ with a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as to sampling from densities that are not log-concave.
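To make the two building blocks concrete, below is a minimal PyTorch sketch, not the authors' implementation: an input convex neural network parameterizing the potential $u$, a pointwise evaluation of $\nabla u$ by autodiff, and a pointwise evaluation of $\nabla u^*$ obtained by solving the conjugate's inner problem $\min_x u(x) - \langle x, y\rangle$ with gradient descent, so that $M = \nabla u^* \circ F$ can be queried at individual points. The names `ICNN`, `grad_u`, and `grad_u_star` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ICNN(nn.Module):
    """Input convex neural network: u(x) is convex in x because the
    z-path weights are kept non-negative (softplus reparameterization)
    and the activation is convex and non-decreasing."""

    def __init__(self, dim: int, hidden: int = 64, depth: int = 3):
        super().__init__()
        # affine maps from the input x into every layer
        self.x_layers = nn.ModuleList(
            [nn.Linear(dim, hidden)]
            + [nn.Linear(dim, hidden) for _ in range(depth - 1)]
            + [nn.Linear(dim, 1)]
        )
        # raw parameters for the non-negative hidden-to-hidden weights
        shapes = [(hidden, hidden)] * (depth - 1) + [(1, hidden)]
        self.z_raw = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(o, i)) for o, i in shapes]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = nn.functional.softplus(self.x_layers[0](x))
        for i, (lin, w) in enumerate(zip(self.x_layers[1:], self.z_raw)):
            pre = lin(x) + z @ nn.functional.softplus(w).T
            z = pre if i == len(self.z_raw) - 1 else nn.functional.softplus(pre)
        return z.squeeze(-1)  # scalar potential u(x)


def grad_u(u: ICNN, x: torch.Tensor) -> torch.Tensor:
    """Pointwise evaluation of the Brenier map nabla u via autodiff."""
    x = x.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(u(x).sum(), x)
    return g


def grad_u_star(u: ICNN, y: torch.Tensor,
                steps: int = 300, lr: float = 0.1) -> torch.Tensor:
    """nabla u*(y) = argmin_x u(x) - <x, y>, solved by gradient descent
    (a convex problem since u is an ICNN), so that M(x) = nabla u*(F(x))
    can be evaluated pointwise."""
    x = y.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (u(x) - (x * y).sum(-1)).sum()
        loss.backward()
        opt.step()
    return x.detach()


# Usage: query M = nabla u* composed with F at a batch of points,
# with torch.sin standing in for an arbitrary vector field F.
dim = 2
u = ICNN(dim)
x = torch.randn(8, dim)
Fx = torch.sin(x)          # F(x), a placeholder field
Mx = grad_u_star(u, Fx)    # M(x) = nabla u*(F(x))
```

In practice $u$ would be trained (e.g., with a neural OT objective) before $M$ is read off this way; the conjugate solve is run per query point, which is what motivates the alternative of learning $M$ as an auxiliary network.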