Multimodal representation learning seeks to relate and decompose information inherent in multiple modalities. By disentangling modality-specific information from information shared across modalities, we can improve interpretability and robustness, and enable downstream tasks such as the generation of counterfactual outcomes. Separating the two types of information is challenging, since they are often deeply entangled in many real-world applications. We propose Disentangled Self-Supervised Learning (DisentangledSSL), a novel self-supervised approach for learning disentangled representations. We present a comprehensive analysis of the optimality of each disentangled representation, particularly focusing on the scenario, not covered in prior work, where the so-called Minimum Necessary Information (MNI) point is not attainable. We demonstrate that DisentangledSSL successfully learns shared and modality-specific features on multiple synthetic and real-world datasets, and consistently outperforms baselines on various downstream tasks, including prediction tasks for vision-language data and molecule-phenotype retrieval tasks for biological data. The code is available at https://github.com/uhlerlab/DisentangledSSL.
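Below is a minimal, illustrative sketch (in PyTorch) of the general idea of splitting each modality's representation into a shared part and a modality-specific part. The concrete choices here, an InfoNCE alignment loss on the shared embeddings, a cross-correlation penalty between shared and specific embeddings, and the encoder architecture and dimensions, are assumptions made for illustration only and do not reproduce the paper's actual objective.

```python
# Illustrative sketch only: NOT the authors' implementation of DisentangledSSL.
# Each modality encoder emits a (shared, specific) pair; shared parts are aligned
# across modalities, and an assumed decorrelation penalty discourages the specific
# part from duplicating shared content.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitEncoder(nn.Module):
    """Encodes one modality into a (shared, specific) pair of embeddings."""

    def __init__(self, in_dim: int, shared_dim: int = 64, specific_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.shared_head = nn.Linear(256, shared_dim)
        self.specific_head = nn.Linear(256, specific_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.shared_head(h), self.specific_head(h)


def info_nce(z1, z2, temperature: float = 0.1):
    """Symmetric InfoNCE: matching pairs within a batch are positives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def decorrelation(shared, specific):
    """Penalize cross-correlation between shared and specific embeddings (an assumed proxy for disentanglement)."""
    s = shared - shared.mean(0)
    p = specific - specific.mean(0)
    cov = s.t() @ p / (shared.size(0) - 1)
    return cov.pow(2).mean()


# Toy usage with random "image" and "text" features.
enc_img, enc_txt = SplitEncoder(in_dim=128), SplitEncoder(in_dim=128)
x_img, x_txt = torch.randn(32, 128), torch.randn(32, 128)
zs_img, zp_img = enc_img(x_img)
zs_txt, zp_txt = enc_txt(x_txt)
loss = info_nce(zs_img, zs_txt) + 0.1 * (decorrelation(zs_img, zp_img) + decorrelation(zs_txt, zp_txt))
loss.backward()
```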