Generative artificial intelligence has transformed synthetic data generation, offering solutions to challenges like data scarcity and privacy constraints, which are particularly acute in fields such as medicine. However, using this synthetic data effectively to train high-performance models remains a significant challenge. This paper addresses the issue by introducing Knowledge Recycling (KR), a pipeline designed to optimise the generation and use of synthetic data for training downstream classifiers. At the heart of the pipeline is Generative Knowledge Distillation (GKD), a technique that improves the quality and usefulness of the information a synthetic dataset conveys to classifiers through a dataset regeneration and soft labelling mechanism. The KR pipeline has been tested on a variety of datasets, with a focus on six highly heterogeneous medical image datasets ranging from retinal images to organ scans. The results show a significant reduction in the performance gap between models trained on real and synthetic data, with models trained on synthetic data outperforming those trained on real data in some cases. Furthermore, the resulting models show almost complete immunity to Membership Inference Attacks, exhibiting privacy properties absent from models trained with conventional techniques.
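To make the soft-labelling mechanism concrete, the following is a minimal PyTorch sketch of one plausible GKD training step; it is an illustration under stated assumptions, not the paper's implementation. The names `generator` (a class-conditional generator), `teacher` (a classifier trained on real data), `student`, `optimiser`, and the `temperature` value are all hypothetical placeholders introduced here.

```python
import torch
import torch.nn.functional as F

def gkd_step(generator, teacher, student, optimiser,
             batch_size=64, num_classes=10, temperature=2.0, device="cpu"):
    """One hypothetical GKD step: regenerate a synthetic batch,
    soft-label it with the teacher, and distil into the student."""
    # Regenerate a fresh synthetic batch, conditioned on random class labels.
    labels = torch.randint(0, num_classes, (batch_size,), device=device)
    with torch.no_grad():
        images = generator(labels)                       # synthetic images
        # Teacher's softened class distribution = the soft labels.
        soft_targets = F.softmax(teacher(images) / temperature, dim=1)

    # Train the student to match the teacher's softened distribution
    # (standard temperature-scaled distillation loss).
    student_log_probs = F.log_softmax(student(images) / temperature, dim=1)
    loss = F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

Regenerating a fresh batch at every step, rather than training on a fixed synthetic set, is one reading of the abstract's "dataset regeneration"; the soft labels matter because the teacher's softened outputs carry inter-class similarity information that hard one-hot labels discard.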