Dynamic prediction of locomotor capacity after stroke could enable more individualized rehabilitation, yet current assessments largely provide static impairment scores and do not indicate whether patients can perform specific tasks such as slope walking or stair climbing. Here, we present a wearable-informed data-physics hybrid generative framework that reconstructs a stroke survivor's locomotor control from wearable inertial sensing and predicts task-conditioned post-stroke locomotion in new environments. From a single 20 m level-ground walking trial recorded by five IMUs, the framework personalizes a physics-based digital avatar using a healthy-motion prior and hybrid imitation learning, generating dynamically feasible, patient-specific movements for inclined walking and stair negotiation. Across 11 stroke inpatients, predicted postures reached 82.2% similarity to measured movements for slopes and 69.9% for stairs, substantially exceeding a physics-only baseline. In a multicentre pilot randomized study (n = 21; 28 days), access to scenario-specific locomotion predictions, used to guide task selection and difficulty titration, was associated with larger gains in Fugl-Meyer lower-extremity scores than standard care (mean change 6.0 vs 3.7 points; $p < 0.05$). These results suggest that wearable-informed generative digital avatars may augment individualized gait rehabilitation planning and provide a pathway toward dynamically personalized post-stroke motor recovery strategies.
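The abstract does not define how the 82.2% and 69.9% posture-similarity figures are computed. As a minimal sketch only, assuming (hypothetically) a per-joint angular-error score averaged across joints, one plausible form of such a metric could look like this; the function name, the linear decay, and the 30-degree tolerance are illustrative assumptions, not the paper's actual method:

```python
def posture_similarity(pred, meas, tol=30.0):
    """Hypothetical posture-similarity score (percent).

    pred, meas: joint angles (degrees) for one posture, same joint order.
    Each joint scores 1.0 at zero error, decaying linearly to 0.0 at
    `tol` degrees of absolute error; the result is the mean across
    joints, expressed as a percentage.
    """
    if len(pred) != len(meas):
        raise ValueError("pred and meas must cover the same joints")
    per_joint = [max(0.0, 1.0 - abs(p - m) / tol) for p, m in zip(pred, meas)]
    return 100.0 * sum(per_joint) / len(per_joint)

# Illustrative hip/knee/ankle angles (degrees) for one predicted vs
# one measured posture; values are made up for the example.
predicted = [25.0, 60.0, -10.0]
measured = [28.0, 52.0, -6.0]
print(round(posture_similarity(predicted, measured), 1))  # → 83.3
```

A linear-decay score is just one choice; a cosine similarity over joint-angle vectors or a dynamic-time-warped distance over whole gait cycles would be equally plausible readings of "posture similarity" absent the paper's definition.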