Generative models can now produce photorealistic synthetic data that is virtually indistinguishable from the real data used to train them. This is a significant evolution over previous models, which could produce reasonable facsimiles of the training data but ones that human evaluators could still visually distinguish from it. Recent work on out-of-distribution (OOD) detection has raised doubts that generative model likelihoods are optimal OOD detectors, due to issues involving likelihood misestimation, entropy in the generative process, and typicality. We speculate that generative OOD detectors also fail because their models focus on pixels rather than on the semantic content of the data, leading to failures in near-OOD cases where the pixels may be similar but the information content is significantly different. We hypothesize that estimating typical sets using self-supervised learners leads to better OOD detectors. We introduce a novel approach that leverages representation learning and informative summary statistics based on manifold estimation to address all of the aforementioned issues. Our method outperforms other unsupervised approaches and achieves state-of-the-art performance on well-established, challenging benchmarks and on new synthetic data detection tasks.