Learning unsupervised representations that are both semantically meaningful and stable across runs remains a central challenge in modern representation learning. We introduce entropy-ordered flows (EOFlows), a normalizing-flow framework that orders latent dimensions by their explained entropy, in analogy to PCA's ordering by explained variance. This ordering enables adaptive injective flows: after training, one may retain only the top C latent variables to form a compact core representation while the remaining variables capture fine-grained detail and noise, with C chosen flexibly at inference time rather than fixed during training. EOFlows build on insights from Independent Mechanism Analysis, Principal Component Flows, and Manifold Entropic Metrics. We combine likelihood-based training with local Jacobian regularization and noise augmentation into a method that scales well to high-dimensional data such as images. Experiments on the CelebA dataset show that our method uncovers a rich set of semantically interpretable features, enabling high compression and strong denoising.
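The following is a minimal sketch (not the authors' implementation) of the adaptive-injective idea described above: once the latent dimensions are ordered by explained entropy, one keeps only the top C latents at inference time and zeros out the tail before inverting the flow. A toy orthogonal linear map stands in for a trained EOFlow; the names `forward`, `inverse`, and the choice of `C` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "flow": an orthogonal matrix acting as an invertible encoder/decoder.
# A real EOFlow would be a trained normalizing flow; this stands in for it.
D = 8
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))

def forward(x):
    """Map data to latent space (toy stand-in for the trained flow)."""
    return x @ Q

def inverse(z):
    """Map latents back to data space (exact inverse of `forward`)."""
    return z @ Q.T

# Assume training has ordered latent dims by explained entropy (descending),
# so dims 0..C-1 form the compact core and the tail holds detail and noise.
x = rng.normal(size=(4, D))
z = forward(x)

C = 3                    # chosen freely at inference time, not fixed in training
z_core = z.copy()
z_core[:, C:] = 0.0      # discard the fine-grained/noise dimensions

x_hat = inverse(z_core)  # compressed / denoised reconstruction
print("reconstruction error:", np.linalg.norm(x - x_hat))
```

Varying `C` at inference trades reconstruction fidelity against compactness without retraining, which is the practical payoff of the entropy ordering.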