Post-Traumatic Stress Disorder (PTSD) remains underdiagnosed in clinical settings, making automated detection a promising route to identifying affected patients. This study evaluates natural language processing approaches for detecting PTSD from clinical interview transcripts. Using the DAIC-WOZ dataset, we compared general and mental-health-specific transformer models (BERT/RoBERTa), embedding-based methods (SentenceBERT/LLaMA), and large language model prompting strategies (zero-shot/few-shot/chain-of-thought). Domain-specific models significantly outperformed general ones (Mental-RoBERTa F1=0.643 vs. RoBERTa-base F1=0.485). LLaMA embeddings combined with neural networks achieved the highest performance (F1=0.700). Zero-shot prompting with DSM-5 criteria yielded competitive results without any training data (F1=0.657). Performance varied significantly with symptom severity and comorbidity status, with higher accuracy for severe PTSD cases and for patients with comorbid depression. These findings highlight the potential of domain-adapted embeddings and LLMs for scalable screening, underscore the need for improved detection of nuanced presentations, and offer insights for developing clinically viable AI tools for PTSD assessment.
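To illustrate the zero-shot prompting strategy mentioned in the abstract, the sketch below assembles a DSM-5-criteria screening prompt for an interview transcript. The criterion summaries, prompt wording, and `build_zero_shot_prompt` helper are illustrative assumptions, not the study's exact prompt or the full DSM-5 text, and the actual LLM call is omitted.

```python
# Hedged sketch: composing a zero-shot PTSD-screening prompt from DSM-5
# symptom-cluster summaries. The wording is an illustrative paraphrase,
# not the paper's exact prompt.

DSM5_CRITERIA = {
    "B": "Intrusion symptoms (e.g., distressing memories, nightmares, flashbacks)",
    "C": "Persistent avoidance of trauma-related stimuli",
    "D": "Negative alterations in cognition and mood",
    "E": "Marked alterations in arousal and reactivity",
}

def build_zero_shot_prompt(transcript: str) -> str:
    """Compose a zero-shot classification prompt over an interview transcript."""
    criteria = "\n".join(f"- Criterion {k}: {v}" for k, v in DSM5_CRITERIA.items())
    return (
        "You are screening a clinical interview transcript for PTSD.\n"
        "Consider the following DSM-5 symptom clusters:\n"
        f"{criteria}\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Answer with 'PTSD' or 'no PTSD' and a one-sentence rationale."
    )

prompt = build_zero_shot_prompt("I keep having the same nightmare about it...")
# The prompt would then be sent to an LLM; the API call is omitted here.
```

A few-shot variant would prepend labeled transcript excerpts before the target transcript, and a chain-of-thought variant would ask the model to assess each criterion in turn before giving a final label.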