Cross-modality distillation arises as an important topic for data modalities with limited knowledge, such as depth maps and high-quality sketches. Such techniques are particularly valuable in memory- and privacy-restricted scenarios where labeled training data is generally unavailable. To address this problem, existing label-free methods leverage a small amount of paired unlabeled data to distill knowledge by aligning features or statistics between the source and target modalities. For instance, one typically aims to minimize the L2 distance or a contrastive loss between the learned features of paired samples in the source (e.g., image) and target (e.g., sketch) modalities. However, most algorithms in this area focus solely on experimental results and lack theoretical insight. To bridge the gap between theory and practice in cross-modality distillation, we first formulate a general framework of cross-modality contrastive distillation (CMCD), built upon contrastive learning that leverages both positive and negative correspondences, toward better distillation of generalizable features. Furthermore, we establish a thorough convergence analysis revealing that the distance between the source and target modalities significantly impacts the test error on downstream tasks in the target modality, a finding that is also validated by our empirical results. Extensive experiments show that our algorithm consistently outperforms existing methods by a margin of 2-3\% across diverse modalities (image, sketch, depth map, and audio) and tasks (recognition and segmentation).
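To make the contrastive alignment objective concrete, the following is a minimal sketch of a symmetric InfoNCE-style loss over paired source/target features, of the kind such a framework builds on. It assumes PyTorch; the function name `cmcd_contrastive_loss` and the `temperature` default are illustrative placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cmcd_contrastive_loss(z_src: torch.Tensor, z_tgt: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss over a batch of paired source/target features.

    z_src, z_tgt: (B, D) features from the source (e.g., image) and
    target (e.g., sketch) encoders for the same B unlabeled pairs.
    Row i of each matrix forms a positive pair; all other rows in the
    batch act as negatives.
    """
    z_src = F.normalize(z_src, dim=1)
    z_tgt = F.normalize(z_tgt, dim=1)
    # (B, B) cosine-similarity matrix, scaled by the temperature.
    logits = z_src @ z_tgt.t() / temperature
    labels = torch.arange(z_src.size(0), device=z_src.device)
    # Symmetric cross-entropy: source-to-target and target-to-source.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

Unlike a plain L2 alignment, the negatives in the softmax denominator push apart features of non-corresponding pairs, which is the additional signal that contrastive distillation with both positive and negative correspondences exploits.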