Generative models can now produce photorealistic synthetic data that is virtually indistinguishable from the real data used to train them. This is a significant advance over previous models, which could produce reasonable facsimiles of the training data, but ones that human evaluators could still distinguish visually. Recent work on out-of-distribution (OOD) detection has cast doubt on whether generative model likelihoods are optimal OOD detectors, citing issues of likelihood misestimation, entropy in the generative process, and typicality. We speculate that generative OOD detectors have also failed because their models focus on pixels rather than on the semantic content of the data, leading to failures in near-OOD cases where the pixels may be similar but the information content differs significantly. We hypothesize that estimating typical sets using self-supervised learners yields better OOD detectors. We introduce a novel approach that leverages representation learning, together with informative summary statistics based on manifold estimation, to address all of the aforementioned issues. Our method outperforms other unsupervised approaches and achieves state-of-the-art performance on well-established, challenging benchmarks and on new synthetic data detection tasks.
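To make the idea concrete, the sketch below scores test points by their distance to the estimated in-distribution manifold in a learned embedding space. This is a minimal illustration, not the paper's actual method: it assumes embeddings from some self-supervised encoder are already available as NumPy arrays, and uses a k-th nearest-neighbour distance as a simple typicality-style summary statistic.

```python
import numpy as np

def knn_ood_scores(train_feats, test_feats, k=5):
    """Score each test embedding by its distance to the k-th nearest
    training embedding. Larger scores mean the point lies farther from
    the estimated in-distribution manifold, i.e. is more likely OOD.

    Illustrative sketch only; `train_feats`/`test_feats` are assumed to
    be (n, d) arrays of encoder features, not raw pixels."""
    # Squared Euclidean distances between every test and train embedding.
    d2 = (np.sum(test_feats ** 2, axis=1, keepdims=True)
          - 2.0 * test_feats @ train_feats.T
          + np.sum(train_feats ** 2, axis=1))
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative round-off
    # k-th nearest-neighbour distance as the typicality statistic.
    return np.partition(np.sqrt(d2), k - 1, axis=1)[:, k - 1]
```

In this toy setup, embeddings drawn far from the training cluster receive systematically higher scores, so thresholding the score gives a simple unsupervised OOD detector.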