This paradigm encapsulates knowledge from multiple models into a single prompt without altering the original models or requiring access to their training data, enabling efficient and convenient knowledge transfer in more realistic scenarios. From a practical standpoint, this paradigm not only demonstrates, for the first time, the effectiveness of Visual Prompts in data-inaccessible settings, but also addresses the low model reusability and high storage consumption of traditional Data-Free Knowledge Transfer, allowing parallel knowledge transfer across multiple models without modifying any source model. Extensive experiments across various datasets and models demonstrate the efficacy of the proposed KiOP knowledge transfer paradigm. Even without access to real training data and under strict storage constraints, it achieves considerable results in cross-model-backbone setups and when handling parallel knowledge transfer requests involving multiple (more than two) models.