While deep generative models (DGMs) have gained popularity, their susceptibility to biases and other inefficiencies that lead to undesirable outcomes remains an issue. As their complexity grows, there is a critical need to detect issues early in order to achieve the desired results and optimize resources. Hence, we introduce a progressive analysis framework to monitor the training process of DGMs. Our method uses dimensionality reduction techniques to facilitate the inspection of latent representations, the generated and real distributions, and their evolution across training iterations. This monitoring allows us to pause and fix the training method if the representations or distributions progress undesirably. The approach enables analysis of a model's training dynamics and the timely identification of biases and failures, minimizing computational load. We demonstrate how our method supports identifying and mitigating biases early in the training of a Generative Adversarial Network (GAN) and improving the quality of the generated data distribution.
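As a minimal illustration of the kind of monitoring the abstract describes, the sketch below projects batches of real and generated samples into a shared low-dimensional space at a training checkpoint and compares their projected means as a coarse drift signal. This is a hypothetical example, not the paper's actual pipeline: the PCA-via-SVD projection, the mean-distance metric, and all data are assumptions standing in for the framework's dimensionality reduction and distribution comparison.

```python
import numpy as np

def pca_project(X, k=2):
    # Center the data and project onto the top-k principal components via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def snapshot_gap(real, fake, k=2):
    # Embed real and generated batches in one shared projection, then report
    # the distance between their projected means as a coarse mismatch signal.
    joint = np.vstack([real, fake])
    Z = pca_project(joint, k)
    z_real, z_fake = Z[: len(real)], Z[len(real):]
    return float(np.linalg.norm(z_real.mean(axis=0) - z_fake.mean(axis=0)))

# Synthetic stand-ins for samples logged at two checkpoints: an early,
# biased generator far from the real distribution, and a later one closer to it.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 64))
fake_early = rng.normal(2.0, 1.0, size=(256, 64))
fake_late = rng.normal(0.1, 1.0, size=(256, 64))
gap_early = snapshot_gap(real, fake_early)
gap_late = snapshot_gap(real, fake_late)
# A shrinking gap across checkpoints suggests training is progressing well;
# a stagnant or growing gap is a cue to pause and intervene.
```

In a real monitoring loop, such a scalar would be logged per checkpoint alongside the 2-D scatter plots themselves, so that an undesirable trend can be caught before further compute is spent.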