Generative models unfairly penalize data belonging to minority classes, suffer from model autophagy disorder (MADness), and learn biased estimates of the underlying distribution parameters. Our theoretical and empirical results show that training generative models with intentionally designed hypernetworks leads to models that 1) are fairer when generating datapoints belonging to minority classes, 2) are more stable in a self-consuming (i.e., MAD) setting, and 3) learn parameters that are less statistically biased. To further mitigate unfairness, MADness, and bias, we introduce a regularization term that penalizes discrepancies between a generative model's estimated weights when trained on real data versus on its own synthetic data. To facilitate training existing deep generative models within our framework, we offer a scalable implementation of hypernetworks that automatically generates a hypernetwork architecture for any given generative model.