Distributed deep neural networks (DNNs) have emerged as a key technique for reducing communication overhead without sacrificing performance in edge computing systems. Recently, entropy coding has been introduced to further reduce the communication overhead. The key idea is to train the distributed DNN jointly with an entropy model, which is used as side information at inference time to adaptively encode latent representations into variable-length bit streams. To the best of our knowledge, the resilience of entropy models has yet to be investigated. As such, in this paper we formulate and investigate the resilience of entropy models to intentional interference (e.g., adversarial attacks) and unintentional interference (e.g., weather changes and motion blur). Through an extensive experimental campaign with 3 different DNN architectures, 2 entropy models, and 4 rate-distortion trade-off factors, we demonstrate that entropy attacks can increase the communication overhead by up to 95%. By separating compression features in the frequency and spatial domains, we propose a new defense mechanism that can reduce the transmission overhead of the attacked input by about 9% compared to unperturbed data, with only about 2% accuracy loss. Importantly, the proposed defense mechanism is a standalone approach that can be applied in conjunction with techniques such as adversarial training to further improve robustness. Code will be shared for reproducibility.
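To make the joint training scheme described above concrete, the following is a minimal, self-contained PyTorch sketch of a split DNN whose on-device encoder is trained jointly with an entropy model, so that the expected bit-stream length (rate) appears directly in the loss alongside the task loss. The architecture, the fully factorized logistic entropy model, and all names (SplitDNN, EntropyModel, lam) are illustrative assumptions for exposition, not the paper's implementation.

```python
# Minimal sketch (assumed, not the authors' code): joint rate-distortion
# training of a split DNN with a learned entropy model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropyModel(nn.Module):
    """Fully factorized entropy model with a learned per-channel logistic density."""
    def __init__(self, channels):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def bits(self, z):
        # Noise-relaxed quantization: model P(round(z)) as CDF(z+.5) - CDF(z-.5).
        scale = torch.exp(self.log_scale)
        cdf = lambda x: torch.sigmoid(x / scale)
        p = (cdf(z + 0.5) - cdf(z - 0.5)).clamp_min(1e-9)
        return -torch.log2(p).sum(dim=(1, 2, 3))  # estimated bits per sample

class SplitDNN(nn.Module):
    def __init__(self, channels=16, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(  # runs on the edge device
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1))
        self.head = nn.Sequential(  # runs on the server
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_classes))
        self.entropy = EntropyModel(channels)

    def forward(self, x):
        z = self.encoder(x)
        # Train-time proxy for rounding: additive uniform noise keeps gradients alive.
        z_tilde = z + torch.empty_like(z).uniform_(-0.5, 0.5)
        return self.head(z_tilde), self.entropy.bits(z_tilde)

model = SplitDNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
lam = 0.01  # one point on the rate-distortion trade-off
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
logits, rate = model(x)
loss = F.cross_entropy(logits, y) + lam * rate.mean()  # task loss + rate penalty
loss.backward(); opt.step()
```

Because the encoded length is input-dependent under such a model, a perturbation that inflates the latents' estimated entropy directly inflates the transmitted bit stream, which is the attack surface the paper studies.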