In this paper, we introduce DOUST, a method that applies test-time training to outlier detection, significantly improving detection performance. After thoroughly evaluating our algorithm on common benchmark datasets, we discuss a common problem and show that it disappears with a large enough test set. We conclude that, under reasonable conditions, our algorithm can reach almost supervised performance even when no labeled outliers are given.