Object detection models, widely used in security-critical applications, are vulnerable to backdoor attacks that cause targeted misclassifications when triggered by specific patterns. Existing backdoor defense techniques, primarily designed for simpler models such as image classifiers, often fail to effectively detect and remove backdoors in object detectors. We propose a backdoor defense framework tailored to object detection models, based on the observation that backdoor attacks cause significant inconsistencies between the behaviors of local modules, such as the Region Proposal Network (RPN) and the classification head. By quantifying and analyzing these inconsistencies, we develop an algorithm to detect backdoors. We further find that the inconsistent module is usually the main source of backdoor behavior, which leads to a removal method that localizes the affected module, resets its parameters, and fine-tunes the model on a small clean dataset. Extensive experiments with state-of-the-art two-stage object detectors show that our method achieves a 90% improvement in backdoor removal rate over fine-tuning baselines, while limiting clean-data accuracy loss to less than 4%. To the best of our knowledge, this work presents the first approach that addresses both the detection and removal of backdoors in two-stage object detection models, advancing the field of securing these complex systems against backdoor attacks.