Accurate lower-limb joint kinematic estimation is critical for applications such as patient monitoring, rehabilitation, and exoskeleton control. While previous studies have employed wearable sensor-based deep learning (DL) models to estimate joint kinematics, these methods often require extensive new datasets to adapt to unseen gait patterns. Meanwhile, researchers in computer vision have advanced human pose estimation models that are easy to deploy and capable of real-time inference. However, such models are infeasible in scenarios where cameras cannot be used. To address these limitations, we propose a computer vision-based DL adaptation framework for real-time joint kinematic estimation. This framework requires only a small dataset (i.e., 1-2 gait cycles) and does not depend on professional motion capture setups. Using transfer learning, we adapted our temporal convolutional network (TCN) to stiff knee gait data, further reducing root mean square error by 9.7% and 19.9% compared with TCNs trained only on the able-bodied dataset and only on the stiff knee dataset, respectively. Our framework demonstrates the potential for smartphone camera-trained DL models to estimate real-time joint kinematics for novel users in clinical populations, with applications in wearable robots.
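The abstract does not give implementation details, but the transfer-learning idea it describes (keep a pretrained temporal feature extractor fixed, adapt only a small output stage to 1-2 gait cycles of new data) can be sketched minimally. The toy NumPy example below is an illustrative stand-in, not the paper's method: the fixed causal filters play the role of a pretrained TCN backbone, the synthetic "stiff knee" signal and the filter/head names are all hypothetical, and adaptation is reduced to refitting a linear head on a short recording.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_conv1d(x, w):
    """Causal 1D convolution: output[t] depends only on x[t-k+1 .. t]."""
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])
    return np.array([xp[t:t + k] @ w for t in range(len(x))])

# Stand-in for a pretrained TCN backbone: fixed (frozen) causal filters.
filters = rng.normal(size=(4, 5))  # 4 filters, kernel size 5 (hypothetical)

def features(x):
    """Frozen feature extractor applied to a 1D input sequence."""
    return np.stack([causal_conv1d(x, w) for w in filters], axis=1)  # (T, 4)

# Hypothetical short "stiff knee" recording (e.g., 1-2 gait cycles of samples):
# the target is linear in the frozen features plus an offset the backbone
# alone cannot capture, mimicking a shifted gait pattern.
T = 200
x = rng.normal(size=T)
true_w = np.array([1.0, -0.5, 0.3, 0.8])
y_stiff = features(x) @ true_w + 0.4

# Transfer learning step: freeze the backbone, refit only a linear head
# (with bias) on the small adaptation set via least squares.
Phi = np.hstack([features(x), np.ones((T, 1))])
head = np.linalg.lstsq(Phi, y_stiff, rcond=None)[0]

rmse = np.sqrt(np.mean((Phi @ head - y_stiff) ** 2))
```

Because only the small head is refit, the adaptation needs far less data than retraining the whole network, which is the intuition behind adapting with only 1-2 gait cycles.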