Generative AI (GenAI), which aims to synthesize realistic and diverse data samples from latent variables or other data modalities, has achieved remarkable results across domains such as natural language, images, audio, and graphs. However, GenAI models also pose challenges and risks to data privacy, security, and ethics. Machine unlearning is the process of removing or weakening the influence of specific data samples or features from a trained model without degrading its performance on other data or tasks. While machine unlearning has proven effective in traditional machine learning tasks, it remains unclear whether it can help make GenAI safer and better aligned with human values. To this end, this position paper provides an in-depth discussion of machine unlearning approaches for GenAI. First, we formulate the problem of machine unlearning for GenAI and introduce the relevant background. We then systematically examine the limitations of machine unlearning on GenAI models, focusing on two representative branches: large language models (LLMs) and image generative (diffusion) models. Finally, we present our outlook from three main aspects: benchmarks, evaluation metrics, and the utility-unlearning trade-off, and conscientiously advocate for the future development of this field.