Over the past decades, researchers have primarily focused on improving the generalization ability of models, with limited attention paid to regulating that generalization. However, a model's ability to generalize to unintended data (e.g., harmful or unauthorized data) can be exploited by malicious adversaries in unforeseen ways, potentially leading to ethical violations. Non-transferable learning (NTL), a task aimed at reshaping the generalization abilities of deep learning models, was proposed to address these challenges. Although numerous methods have been proposed in this field, a comprehensive review of existing progress and a thorough analysis of current limitations are still lacking. In this paper, we bridge this gap by presenting the first comprehensive survey on NTL and introducing NTLBench, the first benchmark that evaluates NTL performance and robustness within a unified framework. Specifically, we first introduce the task settings, general framework, and criteria of NTL, followed by a summary of NTL approaches. We then highlight the often-overlooked issue of robustness against various attacks that can destroy the non-transferable mechanism established by NTL. Experiments conducted with NTLBench verify the limited robustness of existing NTL methods. Finally, we discuss the practical applications of NTL, along with its future directions and associated challenges.