Occlusion is one of the most challenging issues in 3D hand pose estimation. The problem becomes more prominent when the hand interacts with an object or when two hands are involved. Prior work has paid little attention to these occluded regions, yet they contain information that is vital for 3D hand pose estimation. In this paper, we therefore propose an occlusion-robust and accurate method for estimating 3D hand-object pose from an input RGB image. Our method first localises the hand joints using a CNN-based model and then refines them by extracting contextual information. A self-attention transformer then identifies each joint together with its hand identity, i.e., which hand a given joint belongs to, which enables the model to detect joints even in occluded regions. These identity-aware joints are then used to estimate the pose via a cross-attention mechanism. By identifying joints in occluded regions, the resulting network becomes robust to occlusion and achieves state-of-the-art results on the InterHand2.6M, HO3D and H$_2$O3D datasets.
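Since the abstract describes the pipeline only at a high level, the following is a minimal PyTorch sketch of how such a three-stage architecture (CNN joint localisation, identity-aware self-attention, cross-attention pose regression) could be wired together. All module choices, dimensions, layer counts, and the name `HandPosePipeline` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch under assumed dimensions; not the authors' implementation.
import torch
import torch.nn as nn

NUM_JOINTS = 21   # joints per hand (assumed standard hand skeleton)
NUM_HANDS = 2     # left / right
EMBED_DIM = 256   # feature dimension (assumed)

class HandPosePipeline(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: CNN backbone that localises 2D joint candidates as heatmaps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, EMBED_DIM, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.joint_head = nn.Conv2d(EMBED_DIM, NUM_HANDS * NUM_JOINTS, 1)

        # Stage 2: self-attention over joint tokens; each token carries a
        # learned joint-type + hand-identity embedding, so occluded joints
        # can borrow context from visible ones.
        self.joint_embed = nn.Embedding(NUM_HANDS * NUM_JOINTS, EMBED_DIM)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=8, batch_first=True)
        self.self_attn = nn.TransformerEncoder(encoder_layer, num_layers=4)

        # Stage 3: cross-attention from identity-aware joint tokens to image
        # features, followed by 3D coordinate regression.
        self.cross_attn = nn.MultiheadAttention(
            EMBED_DIM, num_heads=8, batch_first=True)
        self.pose_head = nn.Linear(EMBED_DIM, 3)  # (x, y, z) per joint

    def forward(self, image):
        feats = self.backbone(image)                    # (B, C, H', W')
        heatmaps = self.joint_head(feats)               # per-joint localisation
        B = feats.shape[0]
        img_tokens = feats.flatten(2).transpose(1, 2)   # (B, H'*W', C)

        # Joint tokens: (joint, hand-identity) embeddings refined by self-attention.
        ids = torch.arange(NUM_HANDS * NUM_JOINTS, device=image.device)
        tokens = self.joint_embed(ids).unsqueeze(0).repeat(B, 1, 1)
        tokens = self.self_attn(tokens)

        # Cross-attention pools image evidence for each identity-aware token.
        pooled, _ = self.cross_attn(tokens, img_tokens, img_tokens)
        return self.pose_head(pooled), heatmaps         # 3D pose + heatmaps
```

In this sketch the hand-identity information lives in the per-token embedding table, so even when a joint's image evidence is occluded, self-attention can propagate cues from the visible joints of the same hand before cross-attention queries the image features.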