Image and multimodal machine learning tasks are very challenging to solve when data are poorly distributed. In the medical domain in particular, data availability and privacy restrictions exacerbate these hurdles. Latent Diffusion models currently hold the state of the art in image generation quality, making them prime candidates for tackling this problem. However, a few key issues remain, such as the difficulty of generating data from under-represented classes and a slow inference process. To mitigate these issues, we propose a new method for image augmentation on long-tailed data that leverages the rich latent space of pre-trained Stable Diffusion models. We construct a modified, separable latent space in which head- and tail-class examples can be mixed. We build this space via Iterated Learning of underlying sparsified embeddings, which we apply to task-specific saliency maps via a K-NN approach. Code is available at https://github.com/SugarFreeManatee/Feature-Space-Augmentation-and-Iterated-Learning
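To make the mixing idea concrete, the following is a minimal toy sketch (not the paper's implementation) of the core intuition: augment a tail-class example by blending its latent representation with its K nearest head-class latents, while a saliency mask preserves the task-relevant tail features. All shapes, the mask, and the `knn_mix` helper are illustrative assumptions; the actual method operates on Stable Diffusion latents and learned sparsified embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy latents: 100 head-class and 5 tail-class examples,
# each a flattened 8x8 latent (as a pre-trained autoencoder might emit).
head_latents = rng.normal(size=(100, 64))
tail_latents = rng.normal(size=(5, 64))

# Hypothetical saliency mask in [0, 1]: regions a downstream task
# deems class-discriminative for the tail example.
saliency = rng.uniform(size=64)

def knn_mix(tail, heads, mask, k=3, alpha=0.5):
    """Blend a tail latent with the mean of its k nearest head latents,
    keeping salient (task-relevant) tail regions intact."""
    # k nearest head examples by Euclidean distance in latent space
    dists = np.linalg.norm(heads - tail, axis=1)
    neighbors = heads[np.argsort(dists)[:k]].mean(axis=0)
    # where the mask is high, keep the tail features;
    # elsewhere, mix in head-class structure
    return mask * tail + (1 - mask) * (alpha * tail + (1 - alpha) * neighbors)

augmented = knn_mix(tail_latents[0], head_latents, saliency)
print(augmented.shape)  # (64,)
```

With a mask of all ones the tail latent passes through unchanged, which illustrates the role of saliency: only non-discriminative regions borrow structure from head-class neighbors.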