This paper investigates the adversarial robustness of Deep Neural Networks (DNNs) trained with Information Bottleneck (IB) objectives for task-oriented communication systems. We empirically demonstrate that while IB-based approaches provide baseline resilience against attacks targeting downstream tasks, the reliance on generative models in task-oriented communication introduces new vulnerabilities. Through extensive experiments on several datasets, we analyze how bottleneck depth and task complexity influence adversarial robustness. Our key findings show that Shallow Variational Bottleneck Injection (SVBI) provides less adversarial robustness than Deep Variational Information Bottleneck (DVIB) approaches, with the gap widening for more complex tasks. Additionally, we reveal that IB-based objectives are more robust against attacks that concentrate high-intensity perturbations on a few salient pixels than against attacks that spread low-intensity perturbations across many pixels. Finally, we demonstrate that task-oriented communication systems that rely on generative models to extract and recover salient information have an increased attack surface. The results highlight important security considerations for next-generation communication systems that leverage neural networks for goal-oriented compression.
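For readers unfamiliar with the objective the abstract refers to, the following is a minimal sketch of a variational IB classifier in the style of Alemi et al.'s DVIB: a stochastic encoder bounds I(X;Z) via a KL term to a standard-normal prior, while a cross-entropy term bounds I(Z;Y). The layer sizes, latent dimension, and the value of beta are illustrative assumptions, not the configuration evaluated in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    """Sketch of a variational IB classifier (DVIB-style).

    The encoder maps input x to the parameters (mu, log_var) of a Gaussian
    posterior p(z|x); z is sampled with the reparameterization trick and
    decoded into class logits.
    """

    def __init__(self, in_dim: int = 784, z_dim: int = 32, n_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * z_dim),  # outputs mu and log-variance jointly
        )
        self.decoder = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterize
        return self.decoder(z), mu, log_var

def vib_loss(logits, y, mu, log_var, beta: float = 1e-3):
    """Cross-entropy (variational bound related to -I(Z;Y)) plus a
    beta-weighted KL to a standard-normal prior (bound on I(X;Z))."""
    ce = F.cross_entropy(logits, y)
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=-1).mean()
    return ce + beta * kl
```

SVBI and DVIB differ mainly in where along the network the stochastic bottleneck is injected; the trade-off between the two terms is governed by beta in either case.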
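To make the contrast between the two attack families concrete, below is a sketch of a dense low-intensity L-infinity attack (FGSM-style) versus a sparse high-intensity attack that perturbs only the k most salient pixels by gradient magnitude. The budgets eps and k are arbitrary illustrative values, and `model` is assumed to map inputs to class logits (e.g., `lambda x: vib(x)[0]` for the sketch above); this is not the attack configuration used in the paper's experiments.

```python
import torch
import torch.nn.functional as F

def dense_low_intensity_attack(model, x, y, eps: float = 2 / 255):
    """FGSM-style attack: perturbs every pixel, each by at most eps."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def sparse_high_intensity_attack(model, x, y, k: int = 50, eps: float = 0.5):
    """Saliency-guided attack: perturbs only the k pixels with the largest
    gradient magnitude, but with a much larger per-pixel budget eps."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    grad = x.grad.view(x.size(0), -1)
    mask = torch.zeros_like(grad)
    mask.scatter_(1, grad.abs().topk(k, dim=1).indices, 1.0)  # keep top-k pixels
    delta = eps * grad.sign() * mask
    return (x + delta.view_as(x)).clamp(0, 1).detach()
```

The abstract's finding is that IB-trained models withstand the second, saliency-concentrated style of attack better than the first, diffuse one.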