Adversarial robustness of neural networks is an increasingly important area of research, spanning computer vision models, large language models (LLMs), and beyond. With the release of JPEG AI, the first standard for end-to-end neural image compression (NIC) methods, the question of its robustness has become critically significant. JPEG AI is among the first international, real-world applications of neural-network-based models to be embedded in consumer devices. However, research on NIC robustness has so far been limited to open-source codecs and a narrow range of attacks. This paper proposes a new methodology for measuring NIC robustness to adversarial attacks. We present the first large-scale evaluation of JPEG AI's robustness, comparing it with other NIC models. Our evaluation results and code are publicly available online (link hidden for blind review).