Building on recent structural disentanglement frameworks for sign language production, we propose A$^{2}$V-SLP, an alignment-aware variational framework that learns articulator-wise disentangled latent distributions rather than deterministic embeddings. A disentangled Variational Autoencoder (VAE) encodes ground-truth sign pose sequences and extracts articulator-specific mean and variance vectors, which serve as distributional supervision for training a non-autoregressive Transformer. Given text embeddings, the Transformer predicts both latent means and log-variances, and the VAE decoder reconstructs the final sign pose sequences via stochastic sampling at decoding time. By modeling latents as distributions rather than point estimates, this formulation avoids deterministic latent collapse and preserves articulator-level representations. In addition, we integrate a gloss attention mechanism to strengthen the alignment between linguistic input and articulated motion. Experimental results show consistent gains over deterministic latent regression, achieving state-of-the-art back-translation performance and improved motion realism in a fully gloss-free setting.
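For concreteness, the stochastic decoding step admits the standard reparameterized form; the notation below is a minimal sketch under symbol choices of our own (the abstract fixes none), writing $\hat{\mu}_a$ and $\log\hat{\sigma}_a^2$ for the Transformer's predicted mean and log-variance of articulator $a$, and $\mathrm{Dec}$ for the VAE decoder:
\[
\hat{z}_a \;=\; \hat{\mu}_a \;+\; \exp\!\big(\tfrac{1}{2}\log\hat{\sigma}_a^2\big)\odot \epsilon_a,
\qquad \epsilon_a \sim \mathcal{N}(0, I),
\qquad \hat{Y} \;=\; \mathrm{Dec}\big(\hat{z}_1,\dots,\hat{z}_A\big).
\]
One natural instantiation of the distributional supervision, stated here as an assumption since the abstract does not specify the loss, is a per-articulator Gaussian KL divergence between the predicted distribution and the VAE posterior $\mathcal{N}\big(\mu_a, \operatorname{diag}(\sigma_a^2)\big)$:
\[
\mathcal{L}_{\mathrm{dist}} \;=\; \sum_{a=1}^{A}
D_{\mathrm{KL}}\!\Big(\mathcal{N}\big(\hat{\mu}_a, \operatorname{diag}(\hat{\sigma}_a^2)\big)\,\Big\|\,
\mathcal{N}\big(\mu_a, \operatorname{diag}(\sigma_a^2)\big)\Big).
\]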