Growing reliance on LLMs for psychiatric self-assessment raises questions about their ability to interpret qualitative patient narratives. This depth-first case study provides the first direct comparison of state-of-the-art LLMs and mental health professionals in assessing Borderline (BPD) and Narcissistic (NPD) Personality Disorders from Polish-language first-person autobiographical accounts. Within our sample, the overall diagnostic score of the top-performing Gemini Pro models (65.48%) was 21.91 percentage points higher than the average score of the human professionals (43.57%). While both the models and the human experts excelled at identifying BPD (F1 = 83.4 and F1 = 80.0, respectively), the models severely underdiagnosed NPD (F1 = 6.7 vs. 50.0), suggesting a potential reluctance toward the value-laden term "narcissism." Qualitatively, the models produced confident, elaborate justifications focused on patterns and formal categories, whereas the human experts remained concise and cautious, emphasizing the patients' sense of self and temporal experience. Our findings demonstrate that while LLMs may be competent at interpreting complex first-person clinical data, their outputs still carry critical reliability and bias issues.