Training generative models that capture rich semantics of the data, and interpreting the latent representations encoded by such models, are important problems in un-/self-supervised learning. In this work, we provide a simple algorithm that relies on perturbation experiments on the latent codes of a pre-trained generative autoencoder to uncover an attribute graph implied by the generative model. We perform perturbation experiments to check whether a given latent variable influences a subset of attributes. Given this, we show that one can fit an effective graphical model, namely a structural equation model in which latent codes act as exogenous variables and attributes as observed variables. One interesting aspect is that a single latent variable controls multiple overlapping subsets of attributes, unlike conventional approaches that try to impose full independence. Using a generative autoencoder pre-trained on a large dataset of small molecules, we demonstrate that the graphical model between molecular attributes and latent codes learned by our algorithm can be used to predict a specific property for molecules drawn from a different distribution. We compare prediction models trained on feature subsets chosen by simple baselines, as well as by existing causal discovery and sparse learning/feature selection methods, against models trained on the Markov blanket derived by our method. Results show empirically that the predictor relying on our Markov blanket attributes is robust to distribution shifts when transferred or fine-tuned with a few samples from the new distribution, especially when training data is limited.
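The perturbation experiment described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `attributes` function below is a hypothetical stand-in (a fixed linear map) for "decode a latent code with the pre-trained autoencoder, then compute molecular attributes", and the perturbation size `eps` and influence threshold `tau` are assumed hyperparameters. Each latent dimension is perturbed in turn, and the attributes whose mean absolute change exceeds the threshold form that latent variable's influenced subset, yielding a bipartite latent-to-attribute influence graph.

```python
import numpy as np

# Hypothetical stand-in for "decode z with the pre-trained autoencoder,
# then compute attributes of the decoded molecule": a fixed linear map,
# so the example is self-contained. 2 latent dims -> 3 attributes.
W = np.array([[1.0, 0.0, 0.5],
              [0.0, 2.0, 0.0]])

def attributes(z):
    """Attributes of the sample decoded from latent code z (toy model)."""
    return W.T @ z

def influence_graph(attr_fn, z_samples, eps=0.5, tau=1e-3):
    """Perturb each latent dim in turn; record which attributes respond.

    Returns a boolean (num_latents x num_attributes) adjacency matrix:
    edges[j, a] is True if perturbing latent j moves attribute a by more
    than tau on average over the sampled codes.
    """
    d = z_samples.shape[1]
    k = attr_fn(z_samples[0]).shape[0]
    edges = np.zeros((d, k), dtype=bool)
    for j in range(d):
        deltas = []
        for z in z_samples:
            z_pert = z.copy()
            z_pert[j] += eps                     # perturb one latent dim
            deltas.append(np.abs(attr_fn(z_pert) - attr_fn(z)))
        edges[j] = np.mean(deltas, axis=0) > tau  # influenced attribute set
    return edges

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))                     # sampled latent codes
G = influence_graph(attributes, Z)
# With this toy map, latent 0 influences attributes {0, 2} and latent 1
# influences {1}: each latent controls a (possibly overlapping) subset of
# attributes rather than a single disentangled one.
```

In the full pipeline, this influence graph is what the structural equation model is fit over, and the Markov blanket of a target attribute in that model supplies the feature subset used for downstream prediction.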