Variance-Invariance-Covariance Regularization (VICReg) is a self-supervised learning (SSL) method that has shown promising results on a variety of tasks. However, the fundamental mechanisms underlying VICReg remain unexplored. In this paper, we present an information-theoretic perspective on the VICReg objective. We begin by deriving information-theoretic quantities for deterministic networks as an alternative to unrealistic stochastic network assumptions. We then relate the optimization of the VICReg objective to mutual information optimization, highlighting underlying assumptions and facilitating a constructive comparison with other SSL algorithms. We also derive a generalization bound for VICReg, revealing its inherent advantages for downstream tasks. Building on these results, we introduce a family of SSL methods derived from information-theoretic principles that outperform existing SSL techniques.
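For concreteness, the VICReg objective combines three terms: an invariance term pulling the embeddings of two augmented views together, a variance term keeping each embedding dimension's standard deviation above a threshold, and a covariance term decorrelating the embedding dimensions. The NumPy sketch below illustrates these three terms; the coefficient values and the threshold `gamma = 1` follow the defaults reported in the original VICReg paper, and `vicreg_loss` is a name chosen here for illustration.

```python
import numpy as np

def vicreg_loss(z_a, z_b, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    """Sketch of the VICReg objective for two batches of embeddings of shape (n, d)."""
    n, d = z_a.shape
    # Invariance: mean squared distance between the two views' embeddings.
    inv = np.mean(np.sum((z_a - z_b) ** 2, axis=1))

    # Variance: hinge loss keeping each dimension's std above gamma.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))

    # Covariance: push off-diagonal entries of the covariance matrix to zero.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    return (lam * inv
            + mu * (var_term(z_a) + var_term(z_b))
            + nu * (cov_term(z_a) + cov_term(z_b)))
```

When both views are identical the invariance term vanishes, so only the variance and covariance terms contribute; this is one way to sanity-check an implementation.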