State-of-the-art rigging methods assume a canonical rest pose, an assumption that fails for sequential data lacking a T-pose (e.g., animal motion capture or AIGC/video-derived mesh sequences). Applied frame by frame, these methods are not pose-invariant and produce topological inconsistencies across frames. We therefore propose SPRig, a general fine-tuning framework that enforces cross-frame consistency losses on top of existing models to learn pose-invariant rigs. We evaluate rigging with a new permutation-invariant stability protocol. Experiments demonstrate state-of-the-art temporal stability: our method produces coherent rigs from challenging sequences and substantially reduces the artifacts that plague baseline methods. Code will be released publicly upon acceptance.
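The abstract does not specify the form of the cross-frame consistency losses; below is a minimal sketch of what such a loss could look like, assuming a base rigging model that maps one mesh frame to per-vertex skinning weights. The name `rig_model`, the tensor shapes, and the choice of penalizing deviation from the per-vertex temporal mean are all illustrative assumptions, not the paper's actual objective.

```python
import torch

def cross_frame_consistency_loss(rig_model, frames):
    """Penalize variation of predicted skinning weights across frames.

    rig_model: hypothetical base model mapping a (V, 3) vertex tensor
        to per-vertex skinning weights of shape (V, J).
    frames: list of (V, 3) vertex tensors with identical topology,
        i.e., vertex i corresponds across all frames.
    """
    # Predict per-vertex skinning weights for every frame: (T, V, J).
    weights = torch.stack([rig_model(f) for f in frames])
    # A pose-invariant rig should assign the same weights in every
    # frame, so penalize deviation from the per-vertex temporal mean.
    mean_w = weights.mean(dim=0, keepdim=True)  # (1, V, J)
    return ((weights - mean_w) ** 2).mean()
```

Added to the base model's original training loss during fine-tuning, a term of this shape encourages predictions that agree across poses without requiring a canonical rest pose.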