Deep neural networks (DNNs) are widely used in real-world applications, yet they remain vulnerable to errors and adversarial attacks. Formal verification offers a systematic approach to identify and mitigate these vulnerabilities, enhancing model robustness and reliability. While most existing verification methods focus on image classification models, this work extends formal verification to the more complex domain of \emph{object detection} models. We propose a formulation for verifying the robustness of such models and demonstrate how state-of-the-art verification tools, originally developed for classification, can be adapted for this purpose. Our experiments, conducted on various datasets and networks, highlight the ability of formal verification to uncover vulnerabilities in object detection models, underscoring the need to extend verification efforts to this domain. This work lays the foundation for further research into formal verification across a broader range of computer vision applications.