We propose a generative model termed Deciphering Autoencoders. In this model, we assign a unique random dropout pattern to each data point in the training dataset and then train an autoencoder to reconstruct the corresponding data point, using this pattern as the information to be encoded. Even though each data point is assigned a completely random dropout pattern regardless of its similarity to other points, a sufficiently large encoder can smoothly map the patterns to a low-dimensional latent space and reconstruct the individual training data points. During inference, feeding a dropout pattern different from those used during training allows the model to function as a generator. Since the training of Deciphering Autoencoders relies solely on reconstruction error, it is more stable than that of other generative models. Despite their simplicity, Deciphering Autoencoders achieve sampling quality comparable to DCGAN on the CIFAR-10 dataset.
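The core recipe above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration: it replaces the deep autoencoder with a linear map fit by least squares and uses synthetic data in place of CIFAR-10, keeping only the essential idea that each training point receives an arbitrary random binary pattern as its code and the model is trained purely on reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: N "images" flattened to D dimensions (stand-in for CIFAR-10).
N, D, K = 32, 64, 128  # K = length of each random dropout pattern
data = rng.standard_normal((N, D))

# Assign each training point a unique random binary dropout pattern,
# chosen independently of any similarity between data points.
patterns = rng.integers(0, 2, size=(N, K)).astype(float)

# Minimal stand-in for the network: a linear map fit by least squares
# so that each pattern reconstructs its assigned data point.
W, *_ = np.linalg.lstsq(patterns, data, rcond=None)

recon = patterns @ W
train_err = float(np.mean((recon - data) ** 2))  # pure reconstruction loss

# "Generation": feed a fresh random pattern unseen during training.
new_pattern = rng.integers(0, 2, size=(1, K)).astype(float)
sample = new_pattern @ W
```

With more pattern dimensions than training points (K > N), even this linear decoder can drive the reconstruction error to essentially zero; the paper's claim is that a large nonlinear autoencoder does this smoothly enough that unseen patterns decode to plausible samples.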