In the era of generative AI, deep generative models (DGMs) with latent representations have gained tremendous popularity. Despite their impressive empirical performance, the statistical properties of these models remain underexplored. DGMs are often overparametrized, non-identifiable, and uninterpretable black boxes, raising serious concerns when they are deployed in high-stakes applications. Motivated by this, we propose an interpretable deep generative modeling framework for rich data types with discrete latent layers, called Deep Discrete Encoders (DDEs). A DDE is a directed graphical model with multiple binary latent layers. Theoretically, we propose transparent identifiability conditions for DDEs, which imply progressively smaller sizes of the latent layers as they go deeper. Identifiability ensures consistent parameter estimation and inspires an interpretable design of the deep architecture. Computationally, we propose a scalable estimation pipeline: a layerwise nonlinear spectral initialization followed by a penalized stochastic approximation EM algorithm. This procedure can efficiently estimate models with exponentially many latent components. Extensive simulation studies validate our theoretical results and demonstrate the proposed algorithms' excellent performance. We apply DDEs to three diverse real datasets for hierarchical topic modeling, image representation learning, and response time modeling in educational testing, obtaining interpretable findings in each case.
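The generative structure described above, a directed graphical model whose binary latent layers shrink as they go deeper, can be sketched as a top-down sampling process. This is a minimal illustration only: the layer sizes, the logistic (Bernoulli) link, and all parameter names here are assumptions for the sketch, not the paper's estimated quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: latent layers get smaller as they go deeper.
D2, D1, J = 2, 4, 10   # deepest latent layer, shallow latent layer, observed variables
N = 5                  # number of samples

# Illustrative connection weights (in the framework these would be estimated).
W2 = rng.normal(size=(D1, D2)); b2 = rng.normal(size=D1)
W1 = rng.normal(size=(J, D1));  b1 = rng.normal(size=J)

def sample_dde(n):
    """Top-down sampling through two binary latent layers to binary observations."""
    z2 = rng.binomial(1, 0.5, size=(n, D2))         # deepest binary latent layer
    z1 = rng.binomial(1, sigmoid(z2 @ W2.T + b2))   # shallower layer, conditional on z2
    x = rng.binomial(1, sigmoid(z1 @ W1.T + b1))    # observed binary responses
    return z2, z1, x

z2, z1, x = sample_dde(N)
print(x.shape)  # (5, 10)
```

Even in this toy version, the deepest layer already indexes 2^D2 latent configurations, which hints at why naive estimation over all configurations becomes infeasible and a layerwise initialization plus stochastic EM is used instead.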