State-of-the-art rigging methods typically assume a predefined canonical rest pose. However, this assumption does not hold for dynamic mesh sequences such as DyMesh or DT4D, where no canonical T-pose is available. When applied independently frame by frame, existing methods lack pose invariance and often yield temporally inconsistent topologies. To address this limitation, we propose SPRig, a general fine-tuning framework that enforces cross-frame consistency to learn pose-invariant rigs on top of existing models, covering both skeleton and skinning generation. For skeleton generation, we introduce novel consistency regularization in both token space and geometry space. For skinning, we improve temporal stability through an articulation-invariant consistency loss combined with consistency distillation and structural regularization. Extensive experiments show that SPRig achieves superior temporal coherence and significantly reduces the artifacts of prior methods, without sacrificing, and often even enhancing, per-frame static generation quality. The code is available in the supplemental material and will be made publicly available upon publication.
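As a rough illustration of the cross-frame consistency idea described above, the sketch below penalizes disagreement between per-frame skinning-weight predictions for the same mesh under different poses. This is a minimal assumption-laden sketch, not the paper's actual loss: the function name, the use of a sequence-mean consensus, and the array shapes are all hypothetical.

```python
# Hypothetical sketch of a cross-frame consistency loss for per-vertex
# skinning weights. The consensus-mean formulation is an illustrative
# assumption, not taken from the paper.
import numpy as np

def cross_frame_consistency_loss(weights_per_frame):
    """weights_per_frame: list of (V, J) arrays, one per frame, where V is
    the vertex count and J the joint count. Returns the mean squared
    deviation of each frame's weights from the sequence mean, which is
    zero exactly when every frame predicts the same rig."""
    W = np.stack(weights_per_frame)          # (T, V, J)
    W_mean = W.mean(axis=0, keepdims=True)   # consensus weights across frames
    return float(((W - W_mean) ** 2).mean())
```

Under this toy formulation, identical predictions across frames give zero loss, while frame-to-frame drift in the predicted weights is penalized quadratically.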