We present Multi-HMR, a strong single-shot model for multi-person 3D human mesh recovery from a single RGB image. Predictions encompass the whole body, i.e., including hands and facial expressions, using the SMPL-X parametric model, together with 3D location in the camera coordinate system. Our model detects people by predicting coarse 2D heatmaps of person locations, using features produced by a standard Vision Transformer (ViT) backbone. It then predicts their whole-body pose, shape and 3D location using a new cross-attention module called the Human Prediction Head (HPH), with one query attending to the entire set of features for each detected person. As direct prediction of fine-grained hand and facial poses in a single shot, i.e., without relying on explicit crops around body parts, is hard to learn from existing data, we introduce CUFFS, the Close-Up Frames of Full-Body Subjects dataset, containing humans close to the camera with diverse hand poses. We show that incorporating it into the training data further enhances predictions, particularly for hands. Multi-HMR also optionally accounts for camera intrinsics, if available, by encoding camera ray directions for each image token. This simple design achieves strong performance on whole-body and body-only benchmarks simultaneously: a ViT-S backbone on $448{\times}448$ images already yields a fast and competitive model, while larger models and higher resolutions obtain state-of-the-art results.