The basic body shape of a person does not change within a single video. However, most SOTA human mesh estimation (HME) models output a slightly different body shape for each video frame, resulting in inconsistent body shapes for the same person. In contrast, we leverage anthropometric measurements of the kind that tailors have been taking from humans for centuries. We create a model called A2B that converts such anthropometric measurements to the body shape parameters of human mesh models. Moreover, we find that fine-tuned SOTA 3D human pose estimation (HPE) models outperform HME models in terms of the precision of the estimated keypoints. We show that applying inverse kinematics (IK) to the results of such a 3D HPE model and combining the resulting body pose with the A2B body shape leads to superior and consistent human meshes on challenging datasets like ASPset and fit3D, where we lower the MPJPE by over 30 mm compared to SOTA HME models. Further, replacing HME models' estimates of the body shape parameters with A2B model results not only increases the performance of these HME models but also yields consistent body shapes.
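The core idea of mapping fixed per-person anthropometric measurements to mesh shape parameters can be sketched as follows. This is a minimal illustration, not the actual A2B model: the measurement set, the shape-space dimensionality, the regressor type, and all data here are assumptions (the sketch fits a linear map on synthetic data as a stand-in for real measurement/shape pairs).

```python
import numpy as np

# Hypothetical sketch of the A2B idea: learn a mapping from anthropometric
# measurements (e.g. height, arm span, girths) to the low-dimensional shape
# parameters of a parametric human mesh model. The real A2B architecture and
# training data are not specified here; this fits a linear map on synthetic
# data purely for illustration.

rng = np.random.default_rng(0)

N_MEASUREMENTS = 8   # assumed number of anthropometric inputs
N_BETAS = 10         # assumed SMPL-style shape-space dimensionality

# Synthetic stand-in data: a linear measurement-to-shape relation plus noise.
true_W = rng.normal(size=(N_MEASUREMENTS, N_BETAS))
X = rng.normal(size=(500, N_MEASUREMENTS))               # measurements per subject
Y = X @ true_W + 0.01 * rng.normal(size=(500, N_BETAS))  # shape parameters

# Fit the measurement-to-shape map by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def a2b_sketch(measurements: np.ndarray) -> np.ndarray:
    """Map one person's anthropometric measurements to shape parameters.

    The output is computed once per person and reused for every frame of a
    video, which is what makes the resulting body shape consistent."""
    return measurements @ W

betas = a2b_sketch(X[0])
print(betas.shape)  # one fixed shape vector per person
```

The key design point the sketch reflects: because the measurements do not change across frames, the predicted shape vector is constant for the whole video, while only the per-frame pose (here obtained via 3D HPE plus IK) varies.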