Fréchet Inception Distance (FID), computed with an ImageNet-pretrained Inception-v3 network, is widely used as a state-of-the-art evaluation metric for generative models. It assumes that the feature vectors extracted by Inception-v3 follow a multivariate Gaussian distribution and computes the 2-Wasserstein distance between the real and synthetic feature distributions from their means and covariances. While FID effectively measures how closely synthetic data match real data in many image synthesis tasks, the primary goal of biomedical generative models is often to enrich training datasets, ideally with corresponding annotations. For this purpose, the gold standard for evaluating a generative model is to incorporate its synthetic data into downstream task training, such as classification and segmentation, and to pragmatically assess the resulting performance. In this paper, we examine cases from retinal imaging modalities, including color fundus photography and optical coherence tomography, where FID and related metrics misalign with task-specific evaluation goals in classification and segmentation. We highlight the limitations of these metrics, represented by FID and its variants, as evaluation criteria for such applications, and discuss their potential caveats in broader biomedical imaging modalities and downstream tasks.
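The Gaussian assumption described above reduces FID to a closed-form expression, FID = ||μ_r − μ_s||² + Tr(Σ_r + Σ_s − 2(Σ_r Σ_s)^{1/2}). The following is a minimal sketch of that computation, assuming the Inception-v3 feature vectors for real and synthetic images have already been extracted into two arrays (feature extraction itself is not shown, and the function name is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_synth):
    """2-Wasserstein distance between Gaussians fitted to two feature sets.

    feats_real, feats_synth: (n_samples, dim) arrays of precomputed
    Inception-v3 activations (hypothetical inputs for illustration).
    """
    mu_r, mu_s = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_s = np.cov(feats_synth, rowvar=False)
    diff = mu_r - mu_s
    # Matrix square root of the covariance product; tiny imaginary
    # components arising from numerical error are discarded.
    covmean = sqrtm(sigma_r @ sigma_s)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_s - 2.0 * covmean))
```

Because the score is a distance between fitted Gaussians, two identical feature sets yield a value of (numerically) zero, which is a useful sanity check before comparing real and synthetic data.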