The ability to cope with out-of-distribution (OOD) corruptions and adversarial attacks is crucial in real-world safety-critical applications. In this study, we develop a general mechanism for increasing neural network robustness based on focus analysis. Recent studies have revealed the phenomenon of \textit{overfocusing}, which leads to a performance drop: when a network is primarily influenced by small input regions, it becomes less robust and prone to misclassification under noise and corruptions. However, overfocusing has so far lacked a precise, quantitative definition. Here, we provide mathematical definitions of \textbf{focus}, \textbf{overfocusing}, and \textbf{underfocusing}. The notions are general, but in this study we specifically investigate the case of 3D point clouds. We observe that corrupted sets exhibit a biased focus distribution compared to the clean training set, and we show that as the focus distribution deviates from the one learned during training, classification performance deteriorates. We thus propose a parameter-free \textbf{refocusing} algorithm that aims to unify all corruptions under the same distribution. We validate our findings on a 3D zero-shot classification task, achieving SOTA in robust 3D classification on the ModelNet-C dataset and in adversarial defense against the Shape-Invariant attack. Code is available at: https://github.com/yossilevii100/refocusing.