Thanks to the explosive growth of data and the development of computational resources, it has become possible to build pre-trained models that achieve outstanding performance on a wide range of tasks, such as natural language processing, computer vision, and more. Despite their powerful capabilities, pre-trained models have also drawn attention to the emerging security challenges associated with their real-world applications. Security and privacy issues, such as privacy information leakage and the generation of harmful responses, have seriously undermined users' confidence in these powerful models, and such concerns are growing as model performance improves dramatically. Researchers are eager to explore the unique security and privacy issues that have emerged, the factors that distinguish them from traditional ones, and how to defend against them. However, the current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models, which hinders a high-level and comprehensive understanding of these questions. To fill this gap, we conduct a systematic survey of the security risks of pre-trained models, proposing a taxonomy of attack and defense methods based on the accessibility of pre-trained models' inputs and weights in various security test scenarios. This taxonomy categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches. Through this taxonomic analysis, we capture the unique security and privacy issues of pre-trained models, categorizing and summarizing existing security issues based on their characteristics. In addition, we offer a timely and comprehensive review of each category's strengths and limitations. Our survey concludes by highlighting potential new research opportunities in the security and privacy of pre-trained models.