Musculoskeletal disorders pose significant risks to athletes, making early risk assessment important for prevention. However, most existing methods are designed for controlled settings and, because they rely on a single data modality, fail to assess risk reliably in complex environments. This research introduces ViSK-GAT (Visual-Skeletal Geometric Attention Transformer), a novel multimodal deep learning framework that classifies musculoskeletal risk using both visual and skeletal coordinate-based features. A custom multimodal dataset (MusDis-Sports) was created by combining images and skeletal coordinates, with each sample labeled into one of eight risk categories based on the Rapid Entire Body Assessment (REBA) system. ViSK-GAT integrates two novel modules: the Fine-Grained Attention Module (FGAM), which refines inter-modal features via cross-attention between visual and skeletal inputs, and the Multimodal Geometric Correspondence Module (MGCM), which enhances cross-modal alignment between image features and coordinates. The model achieved robust classification performance, with all key metrics exceeding 93%; regression results also indicated a low RMSE of 0.1205 and MAE of 0.0156. ViSK-GAT consistently outperformed nine popular transfer learning backbones, demonstrating its potential to advance AI-driven musculoskeletal risk assessment and enable early, impactful interventions in sports.
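The FGAM is described as refining inter-modal features via cross-attention, with visual and skeletal inputs as the two streams. The paper's exact architecture is not reproduced here, but the core operation can be sketched minimally in NumPy; the function names, token/feature dimensions, and the omission of learned query/key/value projections are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(visual, skeletal):
    """Cross-attention sketch: visual tokens (queries) attend over
    skeletal tokens (keys/values). In a full model, learned linear
    projections would map each modality to Q, K, V first; they are
    omitted here to show only the attention mechanism itself.

    visual:   (n_vis, d) array of visual feature tokens
    skeletal: (n_skel, d) array of skeletal coordinate tokens
    returns:  (n_vis, d) skeletal-informed visual features
    """
    d_k = skeletal.shape[-1]
    scores = visual @ skeletal.T / np.sqrt(d_k)   # (n_vis, n_skel) similarity
    weights = softmax(scores, axis=-1)            # each visual token's distribution over skeletal tokens
    return weights @ skeletal                     # weighted sum of skeletal features

# Illustrative usage with random features
rng = np.random.default_rng(0)
vis = rng.normal(size=(4, 16))    # e.g. 4 visual patch embeddings
skel = rng.normal(size=(6, 16))   # e.g. 6 joint-coordinate embeddings
fused = cross_attention(vis, skel)
print(fused.shape)                # (4, 16)
```

Each output row is a convex combination of skeletal tokens, so the visual stream is re-expressed in terms of the pose information it attends to; a symmetric pass with the roles swapped would give the skeletal stream access to visual context.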