Flow-based methods have achieved significant success across generative modeling tasks, capturing nuanced details of complex data distributions. However, few existing works have exploited this capability to resolve fine-grained structural detail beyond generation itself. This paper presents a flow-inspired framework for representation learning. First, we demonstrate that the velocity field of a rectified flow trained with independent coupling vanishes identically at $t=0.5$ if and only if the source and target distributions are identical. We term this property the \emph{zero-flow criterion}. Second, we show that this criterion can certify conditional independence, thereby extracting \emph{sufficient information} from the data. Third, we translate the criterion into a tractable, simulation-free loss function that enables learning amortized Markov blankets in graphical models and latent representations in self-supervised learning tasks. Experiments on both simulated and real-world datasets demonstrate the effectiveness of our approach. Code reproducing our experiments is available at: https://github.com/probabilityFLOW/zfe.
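As a minimal sketch of the zero-flow criterion (not the paper's implementation), consider one-dimensional Gaussian source and target distributions under independent coupling, where the optimal rectified-flow velocity $v^*(x,t)=\mathbb{E}[x_1-x_0 \mid x_t=x]$ has a closed form. The function name and parameterization below are illustrative assumptions; the example only shows that the velocity at $t=0.5$ vanishes everywhere exactly when the two distributions coincide.

```python
import numpy as np

def rf_velocity(x, t, m0, s0, m1, s1):
    """Closed-form optimal rectified-flow velocity E[x1 - x0 | x_t = x]
    for independent Gaussian coupling x0 ~ N(m0, s0^2), x1 ~ N(m1, s1^2),
    with the linear interpolation x_t = (1 - t) * x0 + t * x1."""
    mean_xt = (1 - t) * m0 + t * m1
    var_xt = (1 - t) ** 2 * s0 ** 2 + t ** 2 * s1 ** 2
    cov = t * s1 ** 2 - (1 - t) * s0 ** 2  # Cov(x1 - x0, x_t)
    return (m1 - m0) + cov / var_xt * (x - mean_xt)

xs = np.linspace(-3.0, 3.0, 101)
# Identical source and target: the velocity vanishes everywhere at t = 0.5.
v_same = rf_velocity(xs, 0.5, m0=0.0, s0=1.0, m1=0.0, s1=1.0)
# Shifted target: at t = 0.5 the velocity is the constant mean gap, nonzero.
v_shift = rf_velocity(xs, 0.5, m0=0.0, s0=1.0, m1=2.0, s1=1.0)
print(np.abs(v_same).max())   # 0.0 everywhere when the distributions match
print(v_shift[:3])            # constant 2.0 when the target is shifted by 2
```

Note that at $t=0.5$ with equal variances the covariance term cancels, so the velocity reduces to the mean gap $m_1 - m_0$: it is zero everywhere precisely when the two distributions agree, which is the content of the criterion in this toy setting.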