Early detection of depression from social media data offers a valuable opportunity for timely intervention. However, this task poses significant challenges, requiring both professional medical knowledge and the development of accurate and explainable models. In this paper, we propose LLM-MTD (Large Language Model for Multi-Task Depression Detection), a novel approach that leverages a pre-trained large language model to simultaneously classify social media posts for depression and generate textual explanations grounded in medical diagnostic criteria. We train our model using a multi-task learning framework with a combined loss function that optimizes both classification accuracy and explanation quality. We evaluate LLM-MTD on the benchmark Reddit Self-Reported Depression Dataset (RSDD) and compare its performance against several competitive baselines, including traditional machine learning methods and fine-tuned BERT. Our experimental results demonstrate that LLM-MTD achieves state-of-the-art performance in depression detection, with significant improvements in AUPRC and other key metrics. Furthermore, human evaluation confirms that the generated explanations are relevant, complete, and medically accurate, highlighting the enhanced interpretability of our approach. This work contributes a novel methodology for depression detection that combines the power of large language models with the crucial aspect of explainability.
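The combined objective described in the abstract can be sketched as a weighted sum of a classification loss and an explanation-generation loss. This is a minimal illustrative sketch under assumed definitions: the binary cross-entropy term, the token-level negative log-likelihood term, and the weight `lam` are assumptions for exposition, not the paper's actual formulation.

```python
import math

def classification_loss(p_depressed, label):
    # Binary cross-entropy on the depression label (label in {0, 1}).
    p = p_depressed if label == 1 else 1.0 - p_depressed
    return -math.log(max(p, 1e-12))

def explanation_loss(token_probs):
    # Mean negative log-likelihood of the reference explanation tokens,
    # where token_probs are the model's probabilities for each gold token.
    return -sum(math.log(max(p, 1e-12)) for p in token_probs) / len(token_probs)

def combined_loss(p_depressed, label, token_probs, lam=0.5):
    # Multi-task objective: classification term plus a weighted
    # explanation-generation term (lam is a hypothetical trade-off weight).
    return classification_loss(p_depressed, label) + lam * explanation_loss(token_probs)
```

A perfect model (probability 1 on the correct label and on every explanation token) drives both terms, and hence the combined loss, to zero; `lam` trades off detection accuracy against explanation quality.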