In many machine learning systems that jointly learn from multiple modalities, a core research question is to understand the nature of multimodal interactions: how modalities combine to provide new task-relevant information that was not present in either alone. We study this challenge of interaction quantification in a semi-supervised setting with only labeled unimodal data and naturally co-occurring multimodal data (e.g., unlabeled images and captions, video and corresponding audio), where labeling the multimodal pairs is time-consuming. Using a precise information-theoretic definition of interactions, our key contribution is the derivation of lower and upper bounds that quantify the amount of multimodal interaction in this semi-supervised setting. We propose two lower bounds: one based on the shared information between modalities and the other based on disagreement between separately trained unimodal classifiers. We derive an upper bound through connections to approximate algorithms for min-entropy couplings. We validate these estimated bounds and show that they accurately track the true amount of interaction. Finally, we show how these theoretical results can be used to estimate multimodal model performance, guide data collection, and select appropriate multimodal models for various tasks.
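To make the disagreement-based lower bound concrete, the sketch below shows the statistic it is built on: train one classifier per modality on labeled unimodal data, then measure how often their predictions differ on unlabeled co-occurring multimodal pairs. This is a minimal illustration under assumed synthetic data (the `sample` generator and all dimensions are hypothetical, not from the paper), and it computes only the raw disagreement rate; the exact form of the bound derived from this quantity is given in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Synthetic setup (hypothetical stand-in for real modalities) ----------
# A latent binary label drives both modalities; each modality is a noisy view.
n_labeled, n_unlabeled, d = 2000, 5000, 10

def sample(n):
    y = rng.integers(0, 2, size=n)
    signal = (2 * y - 1)[:, None]  # +/-1 per example
    x1 = signal * rng.uniform(0.5, 1.5, (n, d)) + rng.normal(0, 1.0, (n, d))
    x2 = signal * rng.uniform(0.5, 1.5, (n, d)) + rng.normal(0, 1.5, (n, d))
    return x1, x2, y

# Labeled *unimodal* data: each classifier sees only its own modality.
x1_lab, _, y1_lab = sample(n_labeled)
_, x2_lab, y2_lab = sample(n_labeled)

# Unlabeled but naturally co-occurring *multimodal* pairs.
x1_un, x2_un, _ = sample(n_unlabeled)

# --- Separately trained unimodal classifiers ------------------------------
f1 = LogisticRegression(max_iter=1000).fit(x1_lab, y1_lab)
f2 = LogisticRegression(max_iter=1000).fit(x2_lab, y2_lab)

# Disagreement on co-occurring pairs: the raw statistic that feeds the
# disagreement-based lower bound on multimodal interactions.
disagreement = np.mean(f1.predict(x1_un) != f2.predict(x2_un))
print(f"unimodal classifier disagreement: {disagreement:.3f}")
```

Note that this estimator needs no multimodal labels at all: the labeled sets are used only unimodally, and the co-occurring pairs are used only to compare predictions, which is what makes the semi-supervised setting tractable.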