Pain management and severity detection are crucial for effective treatment, yet traditional self-reporting methods are subjective and may be unsuitable for non-verbal individuals (people with limited verbal ability). To address this limitation, we explore automated pain detection using facial expressions. Our study leverages deep learning techniques to improve pain assessment by analyzing facial images from the Pain Emotion Faces Database (PEMF). We propose two novel approaches: (1) a hybrid ConvNeXt model combined with Long Short-Term Memory (LSTM) blocks to analyze video frames and predict the presence of pain, and (2) a Spatio-Temporal Graph Convolutional Network (STGCN) integrated with LSTM to process facial landmarks for pain detection. Our work represents the first use of the PEMF dataset for binary pain classification, and we demonstrate the effectiveness of these models through extensive experimentation. The results highlight the potential of combining spatial and temporal features for enhanced pain detection, offering a promising advancement in objective pain assessment methodologies.