Continual learning focuses on incrementally training a model on a sequence of tasks, with the aim of learning new tasks while minimizing the performance drop on previously learned tasks. Existing approaches at the intersection of continual learning and Visual Question Answering (VQA) do not study how the multimodal nature of the input affects the learning dynamics of a model. In this paper, we demonstrate that each modality evolves at a different rate across a continuum of tasks, and that this behavior occurs in established encoder-only models as well as in modern recipes for developing Vision & Language (VL) models. Motivated by this observation, we propose a modality-aware feature distillation (MAFED) approach, which outperforms existing baselines across models of varying scale in three multimodal continual learning settings. Furthermore, we provide ablations showing that modality-aware distillation complements experience replay. Overall, our results emphasize the importance of addressing modality-specific dynamics to prevent forgetting in multimodal continual learning.
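To make the idea of modality-aware feature distillation concrete, the following is a minimal sketch of how a per-modality distillation loss could be combined with the task loss. The function name mafed_loss, the use of a mean-squared-error objective, and the weights alpha_v and alpha_t are illustrative assumptions for this sketch, not the paper's exact formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F


def mafed_loss(vision_feats: torch.Tensor,
               text_feats: torch.Tensor,
               old_vision_feats: torch.Tensor,
               old_text_feats: torch.Tensor,
               alpha_v: float = 1.0,
               alpha_t: float = 0.5) -> torch.Tensor:
    """Sketch of a modality-aware feature distillation term.

    Features from the current model are pulled toward the features of a
    frozen copy of the model from the previous task, with a separate
    weight per modality so that the faster-drifting modality can be
    regularized more strongly. alpha_v and alpha_t are hypothetical
    per-modality weights.
    """
    # Distill each modality separately against the frozen teacher features.
    loss_v = F.mse_loss(vision_feats, old_vision_feats.detach())
    loss_t = F.mse_loss(text_feats, old_text_feats.detach())
    return alpha_v * loss_v + alpha_t * loss_t


# Illustrative usage inside a training step (shapes are arbitrary):
v_new, t_new = torch.randn(8, 196, 768), torch.randn(8, 32, 768)
v_old, t_old = torch.randn(8, 196, 768), torch.randn(8, 32, 768)
task_loss = torch.tensor(0.0)  # placeholder for the VQA objective
total_loss = task_loss + mafed_loss(v_new, t_new, v_old, t_old)
```

In this sketch the per-modality weights are what distinguish the loss from plain feature distillation; they could also be scheduled over tasks, though the abstract does not specify how the weighting is set.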