We present a knowledge augmentation strategy for assessing diagnostic groups and gait impairment from monocular gait videos. Building on a large-scale pre-trained Vision Language Model (VLM), our model learns and improves visual, textual, and numerical representations of patient gait videos through collective learning across three distinct modalities: gait videos, class-specific descriptions, and numerical gait parameters. Our contributions are two-fold. First, we adopt a knowledge-aware prompt tuning strategy that uses class-specific medical descriptions to guide the learning of text prompts. Second, we integrate the paired gait parameters as numerical text to enhance the numeracy of the textual representation. Results demonstrate that our model not only significantly outperforms state-of-the-art (SOTA) methods on video-based classification tasks but also adeptly decodes the learned class-specific text features into natural-language descriptions that use the vocabulary of quantitative gait parameters. The code and the model will be made available on our project page.
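To make the two contributions concrete, the sketch below illustrates one plausible reading of the abstract: CoOp-style learnable context vectors initialized from class-specific medical descriptions (knowledge-aware prompt tuning) and paired gait parameters serialized as numerical text before entering a frozen VLM text encoder. This is a minimal illustration, not the authors' implementation; all names (`SimpleTextEncoder`, `GaitPromptLearner`, `gait_params_to_text`), sizes, and the toy encoder are assumptions.

```python
# Hedged sketch of knowledge-aware prompt tuning with numerical gait-parameter text.
# Names, sizes, and the stand-in text encoder are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMBED_DIM, CTX_LEN = 1000, 128, 8  # toy sizes (assumption)


def gait_params_to_text(params: dict) -> str:
    """Serialize numerical gait parameters as plain text (assumed format)."""
    return ", ".join(f"{k} {v:.2f}" for k, v in params.items())


class SimpleTextEncoder(nn.Module):
    """Toy stand-in for a frozen VLM text encoder (e.g., CLIP's text tower)."""
    def __init__(self):
        super().__init__()
        self.token_embed = nn.Embedding(VOCAB, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(EMBED_DIM, EMBED_DIM)

    def forward(self, token_embeddings):            # (B, L, D)
        h = self.encoder(token_embeddings)
        return self.proj(h.mean(dim=1))             # pooled text feature (B, D)


class GaitPromptLearner(nn.Module):
    """Learnable context vectors, initialized from the embeddings of
    class-specific medical descriptions (knowledge-aware initialization)."""
    def __init__(self, text_encoder, class_desc_tokens):   # (C, CTX_LEN) ids
        super().__init__()
        with torch.no_grad():
            init = text_encoder.token_embed(class_desc_tokens)  # (C, CTX_LEN, D)
        self.ctx = nn.Parameter(init.clone())        # tuned during training

    def forward(self, numeric_text_embed):           # (C, Ln, D)
        # Concatenate the knowledge-initialized context with the embedded
        # numerical gait-parameter text for each class.
        return torch.cat([self.ctx, numeric_text_embed], dim=1)


if __name__ == "__main__":
    num_classes = 3
    text_encoder = SimpleTextEncoder()
    for p in text_encoder.parameters():              # keep the text encoder frozen
        p.requires_grad_(False)

    # Hypothetical tokenized class descriptions and per-class numeric texts.
    class_desc_tokens = torch.randint(0, VOCAB, (num_classes, CTX_LEN))
    numeric_tokens = torch.randint(0, VOCAB, (num_classes, 6))
    numeric_embed = text_encoder.token_embed(numeric_tokens)

    prompt_learner = GaitPromptLearner(text_encoder, class_desc_tokens)
    prompts = prompt_learner(numeric_embed)          # (C, CTX_LEN + 6, D)
    text_feats = F.normalize(text_encoder(prompts), dim=-1)        # (C, D)

    # Placeholder video features; in practice these come from the video encoder.
    video_feats = F.normalize(torch.randn(4, EMBED_DIM), dim=-1)
    logits = 100.0 * video_feats @ text_feats.t()    # video-to-class similarity
    print(logits.shape)                              # torch.Size([4, 3])
    print(gait_params_to_text({"stride length (m)": 0.85,
                               "cadence (steps/min)": 95.0}))
```

The classification head is the usual VLM recipe: cosine similarity between video features and class-wise text features, where the text side is shaped by both the medical-knowledge prior and the numerical gait text.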