Reconstructing detailed 3D human meshes from a single in-the-wild image remains a fundamental challenge in computer vision. Existing SMPLX-based methods often suffer from slow inference, produce only coarse body poses, and exhibit misalignments or unnatural artifacts in fine-grained regions such as the face and hands. These issues make current approaches difficult to apply to downstream tasks. To address these challenges, we propose PEAR, a fast and robust framework for pixel-aligned expressive human mesh recovery. PEAR explicitly tackles three major limitations of existing methods: slow inference, inaccurate localization of fine-grained human pose details, and insufficient facial expression capture. Specifically, to enable real-time SMPLX parameter inference, we depart from prior designs that rely on high-resolution inputs or multi-branch architectures and instead adopt a clean, unified ViT-based model that recovers coarse 3D human geometry. To compensate for the loss of fine-grained detail caused by this simplified architecture, we introduce pixel-level supervision to optimize the geometry, significantly improving the reconstruction accuracy of fine-grained human details. To make this approach practical, we further propose a modular data annotation strategy that enriches the training data and enhances model robustness. Overall, PEAR is a preprocessing-free framework that simultaneously infers EHM (SMPLX and scaled-FLAME) parameters at over 100 FPS. Extensive experiments on multiple benchmark datasets demonstrate that our method achieves substantial improvements in pose estimation accuracy over previous SMPLX-based approaches. Project page: https://wujh2001.github.io/PEAR