This paper introduces a novel data-free model extraction attack that significantly advances the state of the art in efficiency, accuracy, and effectiveness. Traditional black-box methods use the victim's model as an oracle to label a vast number of samples drawn from high-confidence regions. This approach not only requires an extensive number of queries but also yields a less accurate and less transferable substitute model. In contrast, our method samples low-confidence regions (along the decision boundaries) and employs an evolutionary algorithm to optimize the sampling process. These contributions reduce the attacker's query budget by a factor of 10x to 600x while simultaneously improving the accuracy of the stolen model. Moreover, our approach improves boundary alignment, yielding better transferability of adversarial examples from the stolen model to the victim's model (raising the average attack success rate from 60\% to 82\%). Finally, we accomplish all of this under a strict black-box assumption, with no knowledge of the victim's architecture or dataset. We demonstrate our attack on three datasets of increasingly larger resolution and compare our performance to four state-of-the-art model extraction attacks.
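The core idea of evolutionary low-confidence sampling can be illustrated with a minimal sketch. Everything below is a hypothetical toy, not the paper's implementation: the victim is stood in for by a tiny linear softmax classifier (the real attack only assumes black-box query access to such an oracle), and the fitness used for selection is the gap between the top-two predicted probabilities, so that individuals with small gaps, i.e. samples near the decision boundary, survive.

```python
import numpy as np

rng = np.random.default_rng(0)

def victim_predict(x):
    # Hypothetical stand-in for the black-box victim: a toy 2-class
    # linear classifier over 2-D inputs returning softmax probabilities.
    logits = np.stack([x @ np.array([1.0, -1.0]),
                       x @ np.array([-1.0, 1.0])], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confidence_margin(probs):
    # Gap between the top-two class probabilities; a small gap means
    # the sample lies close to the victim's decision boundary.
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def evolve_boundary_samples(pop_size=32, dim=2, steps=50, sigma=0.3):
    # Simple (mu + lambda) evolutionary loop with elitism: mutate the
    # population, pool parents and children, and keep the individuals
    # with the lowest confidence margins.
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(steps):
        children = pop + rng.normal(scale=sigma, size=pop.shape)
        pool = np.vstack([pop, children])
        margins = confidence_margin(victim_predict(pool))
        pop = pool[np.argsort(margins)[:pop_size]]
    return pop

samples = evolve_boundary_samples()
```

After a few dozen generations the surviving samples cluster along the toy victim's boundary (here the line $x_1 = x_2$), which is where oracle labels carry the most information about the boundary's shape.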