We present X-MDPT ($\underline{Cross}$-view $\underline{M}$asked $\underline{D}$iffusion $\underline{P}$rediction $\underline{T}$ransformers), a novel diffusion model designed for pose-guided human image generation. X-MDPT distinguishes itself by employing masked diffusion transformers that operate on latent patches, a departure from the commonly used Unet structures in existing works. The model comprises three key modules: 1) a denoising diffusion Transformer, 2) an aggregation network that consolidates conditions into a single vector for the diffusion process, and 3) a mask cross-prediction module that enhances representation learning with semantic information from the reference image. X-MDPT demonstrates scalability, improving FID, SSIM, and LPIPS as model size grows. Despite its simple design, our model outperforms state-of-the-art approaches on the DeepFashion dataset while remaining efficient in terms of training parameters, training time, and inference speed. Our compact 33MB model achieves an FID of 7.42, surpassing a prior Unet-based latent diffusion approach (FID 8.07) with $11\times$ fewer parameters. Our best model surpasses the pixel-based diffusion with $\frac{2}{3}$ of the parameters and achieves $5.43\times$ faster inference. The code is available at https://github.com/trungpx/xmdpt.
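The three-module pipeline described above can be illustrated with a toy NumPy sketch. This is a minimal conceptual sketch only: all shapes, weight matrices, and function names are illustrative assumptions, not the paper's actual architecture (which uses masked diffusion Transformers on latent patches).

```python
import numpy as np

# Toy dimensions (assumptions, not the paper's settings)
rng = np.random.default_rng(0)
D = 64   # embedding dimension
N = 16   # number of latent patches

def aggregate(pose_emb, ref_emb, W):
    # Module 2 (toy): aggregation network consolidating the pose and
    # reference-image conditions into a single conditioning vector.
    return np.tanh(np.concatenate([pose_emb, ref_emb]) @ W)

def denoise_step(noisy_patches, cond_vec, W_q, W_k):
    # Module 1 (toy): one denoising "transformer" pass over latent patch
    # tokens, with the condition vector injected additively per patch.
    x = noisy_patches + cond_vec                  # broadcast condition
    q, k = x @ W_q, x @ W_k
    attn = np.exp(q @ k.T / np.sqrt(D))
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over tokens
    return attn @ x                               # attention-weighted mix

# Hypothetical conditioning inputs
pose_emb = rng.standard_normal(D)
ref_emb = rng.standard_normal(D)
W_agg = rng.standard_normal((2 * D, D)) * 0.1
cond = aggregate(pose_emb, ref_emb, W_agg)        # single condition vector

# One toy denoising step over N noisy latent patches
noisy = rng.standard_normal((N, D))
W_q = rng.standard_normal((D, D)) * 0.1
W_k = rng.standard_normal((D, D)) * 0.1
out = denoise_step(noisy, cond, W_q, W_k)
print(cond.shape, out.shape)  # (64,) (16, 64)
```

Module 3 (mask cross-prediction) is omitted here; conceptually it masks a subset of the N patch tokens and trains the model to predict them from the reference image's semantics, as an auxiliary representation-learning objective.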