Stress is a pervasive global health issue that can lead to severe mental health problems. Early detection enables timely intervention and prevention of stress-related disorders. Current early detection models perform "black box" inference and suffer from limited explainability and trust, which hinders their real-world clinical application. Thanks to the generative properties of Large Language Models (LLMs), the decisions and predictions of such models are semi-interpretable through the accompanying descriptions. However, existing LLMs are mostly trained for general purposes without the guidance of psychological cognitive theory. To this end, we first highlight the importance of prior theory by observing the performance gains brought by a chain-of-thought tailored for stress detection. This method, termed Cognition Chain, explicates the generation of stress from a step-by-step cognitive perspective grounded in cognitive appraisal theory, following the pipeline Stimulus $\rightarrow$ Evaluation $\rightarrow$ Reaction $\rightarrow$ Stress State, and guides LLMs to provide comprehensive reasoning explanations. We further study the benefits of the proposed Cognition Chain format by using it as a template for synthetic dataset generation for LLM instruction tuning, and introduce CogInstruct, an instruction-tuning dataset for stress detection. This dataset is developed using a three-stage self-reflective annotation pipeline that enables LLMs to autonomously generate and refine instructional data. By instruction-tuning Llama3 with CogInstruct, we develop CogLLM, an explainable stress detection model. Evaluations demonstrate that CogLLM achieves outstanding performance while enhancing explainability. Our work contributes a novel approach by integrating cognitive theories into LLM reasoning processes, offering a promising direction for future explainable AI research.
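As a minimal sketch of how the Cognition Chain pipeline could be rendered as an LLM prompt: the step wording, function name, and example post below are illustrative assumptions, not the paper's exact template.

```python
# Hypothetical sketch: assembling a Cognition Chain prompt that walks an LLM
# through Stimulus -> Evaluation -> Reaction -> Stress State. Step phrasing is
# an assumption for illustration, not the template used in the paper.

COGNITION_CHAIN_STEPS = [
    ("Stimulus", "Identify the external event or situation described in the post."),
    ("Evaluation", "Describe how the author appraises the stimulus (e.g., as a threat, loss, or challenge)."),
    ("Reaction", "Summarise the author's emotional, physiological, or behavioural response."),
    ("Stress State", "Conclude whether the author is stressed (yes/no) and justify briefly."),
]

def build_cognition_chain_prompt(post: str) -> str:
    """Compose a step-by-step prompt following the Cognition Chain pipeline."""
    lines = [f"Post: {post}", "", "Reason step by step:"]
    for i, (name, instruction) in enumerate(COGNITION_CHAIN_STEPS, start=1):
        lines.append(f"{i}. {name}: {instruction}")
    return "\n".join(lines)

if __name__ == "__main__":
    example = "I have three deadlines this week and I can't sleep at all."
    print(build_cognition_chain_prompt(example))
```

A prompt of this shape can also serve as the generation template for synthetic instruction-tuning data, with the model's four-step answer kept as the target output.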