Object-based Novelty Detection (ND) aims to identify unknown objects that do not belong to any class seen during training by an object detection model. The task is particularly crucial in real-world applications, as it makes it possible to avoid potentially harmful behaviours, e.g. when object detection models are deployed in self-driving cars or autonomous robots. Traditional approaches to ND focus on a one-time, offline post-processing of the pre-trained object detector's output, leaving no possibility to improve the model's robustness after training and discarding the abundant out-of-distribution data encountered during deployment. In this work, we propose a novel framework for object-based ND, assuming that human feedback can be requested on the predicted output and later incorporated to refine the ND model without negatively affecting the main object detection performance. This refinement is repeated whenever new feedback becomes available. To tackle this new formulation of the problem for object detection, we propose a lightweight ND module attached on top of a pre-trained object detection model and incrementally updated through a feedback loop. We also propose a new benchmark to evaluate methods in this setting and extensively test our ND approach against baselines, showing increased robustness and successful incorporation of the received feedback.
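To make the setting concrete, the pattern of a lightweight novelty head on top of a frozen detector, updated incrementally from human feedback, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual module: the `NoveltyHead` class, its logistic scorer, and the gradient update are all hypothetical stand-ins, assuming the detector exposes fixed per-box feature embeddings and that feedback arrives as binary novel/known labels.

```python
import numpy as np

class NoveltyHead:
    """Hypothetical lightweight ND head on top of frozen detector features.

    A logistic scorer w, b maps each predicted box's embedding to a novelty
    probability. Only the head is trained from feedback; the detector stays
    frozen, so the main detection performance is unaffected.
    """

    def __init__(self, feat_dim, lr=0.1):
        self.w = np.zeros(feat_dim)  # scorer weights (head-only parameters)
        self.b = 0.0
        self.lr = lr

    def novelty_score(self, feats):
        # feats: (N, feat_dim) frozen detector embeddings for predicted boxes.
        # Returns per-box probability of being an unknown object.
        return 1.0 / (1.0 + np.exp(-(feats @ self.w + self.b)))

    def incorporate_feedback(self, feats, is_novel, epochs=200):
        # is_novel: binary human labels (1 = unknown/novel object).
        # Plain gradient descent on the logistic loss; called again
        # whenever a new batch of feedback becomes available.
        y = np.asarray(is_novel, dtype=float)
        for _ in range(epochs):
            p = self.novelty_score(feats)
            self.w -= self.lr * (feats.T @ (p - y) / len(y))
            self.b -= self.lr * float(np.mean(p - y))
```

A usage round then alternates prediction, feedback request, and head refinement: score the boxes of a deployment batch, ask a human to flag the unknown ones, and call `incorporate_feedback` on those embeddings before the next batch.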