The anthropomorphism of the grasping process significantly benefits the experience and grasping efficiency of prosthetic hand wearers. Currently, prosthetic hands controlled by signals such as brain-computer interfaces (BCI) and electromyography (EMG) have difficulty precisely recognizing amputees' grasping gestures and executing anthropomorphic grasping processes. Although prosthetic hands equipped with vision systems enable recognition of object features, they lack perception of human grasping intention. Therefore, this paper explores the estimation of grasping gestures solely from visual data to achieve anthropomorphic grasping control and to determine grasping intention in a multi-object environment. To this end, we propose the Spatial Geometry-based Gesture Mapping (SG-GM) method, which constructs gesture functions from the geometric features of the human hand's grasping process; the method is then implemented on the prosthetic hand. Furthermore, we propose the Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) algorithm, which predicts the pre-grasping object using regression prediction and a prior spatial segmentation estimate derived from the prosthetic hand's position and trajectory. Experiments were conducted on grasping eight common daily objects, including a cup and a fork. The results showed a grasping-process similarity coefficient $R^{2}$ of 0.911, a Root Mean Squared Error ($RMSE$) of 2.47\degree, a grasping success rate of 95.43$\%$, and an average grasping duration of 3.07$\pm$0.41 s. In addition, grasping experiments in a multi-object environment yielded an average intent-estimation accuracy of 94.35$\%$. Our methodologies offer a groundbreaking approach to enhancing prosthetic hand functionality and provide valuable insights for future research.
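To make the trajectory-regression idea concrete, the following is a minimal sketch of intent estimation from hand motion: fit a linear motion model to recent hand positions, extrapolate forward, and pick the candidate object nearest the predicted endpoint. This is an illustrative assumption, not the paper's actual MTR-GIE implementation (which additionally uses a prior spatial segmentation estimate); the function name, horizon parameter, and linear model are hypothetical.

```python
import numpy as np

def estimate_grasp_intent(trajectory, object_positions, horizon=5):
    """Illustrative trajectory-regression intent estimator (hypothetical sketch).

    trajectory: (T, 3) array of recent hand/wrist positions, one per time step.
    object_positions: (N, 3) array of candidate object centers.
    horizon: number of steps to extrapolate the fitted motion model.

    Fits a least-squares line p(t) = a*t + b independently per coordinate axis,
    extrapolates `horizon` steps past the last sample, and returns the index of
    the object closest to the predicted endpoint.
    """
    trajectory = np.asarray(trajectory, dtype=float)
    object_positions = np.asarray(object_positions, dtype=float)
    t = np.arange(len(trajectory))
    # polyfit with 2-D y fits each column (axis) separately; coeffs has shape (2, 3)
    coeffs = np.polyfit(t, trajectory, deg=1)
    t_future = len(trajectory) - 1 + horizon
    predicted = coeffs[0] * t_future + coeffs[1]  # extrapolated hand position
    dists = np.linalg.norm(object_positions - predicted, axis=1)
    return int(np.argmin(dists))
```

A real system would refresh this estimate every frame as the hand moves, so the predicted target can switch between objects until the trajectory commits.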