We investigate the effectiveness of Explainable AI (XAI) in verifying Machine Unlearning (MU) within the context of harbor front monitoring, focusing on data privacy and regulatory compliance. With the increasing need to adhere to privacy legislation such as the General Data Protection Regulation (GDPR), traditional methods of retraining ML models for data deletions prove impractical due to their complexity and resource demands. MU offers a solution by enabling models to selectively forget specific learned patterns without full retraining. We explore various removal techniques, including data relabeling and model perturbation. We then leverage attribution-based XAI to analyze the effects of unlearning on model performance. Our proof-of-concept introduces feature importance as an innovative verification step for MU, expanding beyond traditional metrics and demonstrating these techniques' ability to reduce reliance on undesired patterns. Additionally, we propose two novel XAI-based metrics, Heatmap Coverage (HC) and Attention Shift (AS), to evaluate the effectiveness of these methods. This approach not only highlights how XAI can complement MU by providing effective verification, but also sets the stage for future research to enhance their joint integration.
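As an illustrative sketch only (the abstract does not give the paper's exact definitions of HC and AS), the idea of attribution-based verification can be approximated as follows: compare attribution heatmaps before and after unlearning, measure how much of a "forget" region the post-unlearning heatmap still covers, and how far the attribution mass has moved. The array sizes, threshold, and the centroid-distance formulation below are hypothetical placeholders, not the metrics proposed in the paper.

```python
import numpy as np

def heatmap_coverage(attr, roi_mask, thresh=0.5):
    """Fraction of the region of interest still covered by
    above-threshold attribution (illustrative, not the paper's HC)."""
    hot = attr >= thresh * attr.max()
    return (hot & roi_mask).sum() / roi_mask.sum()

def attention_shift(attr_before, attr_after):
    """Distance between attribution centroids before and after
    unlearning (illustrative, not the paper's AS)."""
    def centroid(a):
        idx = np.indices(a.shape).reshape(2, -1)  # pixel coordinates
        w = a.reshape(-1) / a.sum()               # normalized weights
        return idx @ w
    return np.linalg.norm(centroid(attr_before) - centroid(attr_after))

# Hypothetical 8x8 attribution maps: before unlearning the model attends
# to the top-left "forget" region; afterwards attention has moved away.
before = np.zeros((8, 8)); before[:3, :3] = 1.0
after = np.zeros((8, 8)); after[5:, 5:] = 1.0
roi = np.zeros((8, 8), dtype=bool); roi[:3, :3] = True

print(heatmap_coverage(before, roi))   # full coverage of forget region
print(heatmap_coverage(after, roi))    # no coverage after unlearning
print(attention_shift(before, after))  # centroid moved across the map
```

In a real pipeline, `attr` would come from an attribution method (e.g., gradient saliency or Grad-CAM) applied to the model before and after the unlearning step; a drop in coverage of the forgotten region, together with a large attention shift, would indicate that the model no longer relies on the undesired pattern.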