As object detection models are increasingly deployed in cyber-physical systems such as autonomous vehicles (AVs) and surveillance platforms, ensuring their security against adversarial threats is essential. While prior work has explored adversarial attacks in the image domain, attacks in the video domain remain largely unexplored, especially in the no-box setting. In this paper, we present {\alpha}-Cloak, the first no-box adversarial attack on object detectors that operates entirely through the alpha channel of RGBA videos. {\alpha}-Cloak exploits the alpha channel to fuse a malicious target video with a benign video, producing a fused video that appears innocuous to human viewers yet consistently fools object detectors. Our attack requires no access to model architecture, parameters, or outputs, and introduces no perceptible artifacts. We systematically study alpha-channel support across common video formats and playback applications, and design a fusion algorithm that ensures both visual stealth and format compatibility. We evaluate {\alpha}-Cloak on five state-of-the-art object detectors, a vision-language model, and a multi-modal large language model (Gemini-2.0-Flash), demonstrating a 100% attack success rate across all scenarios. Our findings reveal a previously unexplored vulnerability in video-based perception systems, highlighting the urgent need for defenses that account for the alpha channel in adversarial settings.
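The core discrepancy the abstract describes can be illustrated with a minimal sketch: a compliant video player alpha-composites each RGBA frame, so a fully transparent layer is invisible to viewers, while a perception pipeline that discards the alpha channel and consumes raw RGB sees whatever content those channels carry. This is only an assumed toy illustration of the general principle, not the paper's actual fusion algorithm; all array names and values here are hypothetical.

```python
import numpy as np

H, W = 4, 4
# Hypothetical content: RGB channels carry the attacker's target frame,
# while the benign frame is what the viewer is meant to see.
target_rgb = np.full((H, W, 3), 200, dtype=np.uint8)
benign_rgb = np.full((H, W, 3), 30, dtype=np.uint8)

# Build an RGBA frame with alpha = 0 (fully transparent),
# so a player renders none of the target content.
alpha = np.zeros((H, W, 1), dtype=np.uint8)
fused = np.concatenate([target_rgb, alpha], axis=-1)

# A compliant player composites the frame over the benign background:
# out = a * foreground + (1 - a) * background.
a = fused[..., 3:4].astype(np.float32) / 255.0
shown = (a * fused[..., :3] + (1 - a) * benign_rgb).astype(np.uint8)
assert np.array_equal(shown, benign_rgb)      # the viewer sees only benign content

# A detector pipeline that drops the alpha channel reads the raw RGB planes:
fed_to_model = fused[..., :3]
assert np.array_equal(fed_to_model, target_rgb)  # the model sees the target content
```

The sketch shows why the attack is no-box: the divergence arises from how the rendering and inference pipelines interpret the same file, not from any knowledge of the model.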