Capturing accurate 3D human pose in the wild would provide valuable data for training pose estimation and motion generation methods. While video-based estimation approaches have become increasingly accurate, they often fail in common scenarios involving self-contact, such as a hand touching the face. In contrast, wearable bioimpedance sensing can cheaply and unobtrusively measure ground-truth skin-to-skin contact. Consequently, we propose a novel framework that combines visual pose estimators with bioimpedance sensing to capture the 3D pose of people while taking self-contact into account. Our method, BioTUCH, initializes the pose using an off-the-shelf estimator and introduces contact-aware pose optimization during measured self-contact: it minimizes the reprojection error and the deviation from the input estimate while enforcing vertex proximity constraints. We validate our approach using a new dataset of synchronized RGB video, bioimpedance measurements, and 3D motion capture. Testing with three input pose estimators, we demonstrate an average improvement of 11.7% in reconstruction accuracy. We also present a miniature wearable bioimpedance sensor that enables efficient large-scale collection of contact-aware training data for improving pose estimation and generation using BioTUCH. Code and data are available at biotuch.is.tue.mpg.de.
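As a rough sketch of the contact-aware optimization described above, the objective might take a form like the following, where $\theta$ denotes the body pose parameters, $\theta_{\mathrm{init}}$ the initial estimate from the off-the-shelf regressor, $E_{\mathrm{reproj}}$ the 2D reprojection error of the posed model against image keypoints, and $\mathcal{C}$ the set of mesh vertex pairs required to be in proximity whenever the bioimpedance sensor reports skin-to-skin contact. The specific terms, notation, and weights $\lambda$ are illustrative assumptions, not the paper's exact formulation:

\[
E(\theta) \;=\; \lambda_{\mathrm{2D}}\, E_{\mathrm{reproj}}(\theta)
\;+\; \lambda_{\mathrm{init}}\, \lVert \theta - \theta_{\mathrm{init}} \rVert^{2}
\;+\; \lambda_{\mathrm{c}} \sum_{(i,j) \in \mathcal{C}} \lVert v_{i}(\theta) - v_{j}(\theta) \rVert^{2}
\]

Under this reading, the first two terms keep the solution faithful to the image evidence and the visual initialization, while the contact term is active only during frames in which self-contact is actually measured, pulling the corresponding surface vertices together.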