Disorganized thinking is a key diagnostic indicator of schizophrenia-spectrum disorders. Recently, clinical estimates of the severity of disorganized thinking have been shown to correlate with measures of how difficult speech transcripts are for large language models (LLMs) to predict. However, practical barriers to deploying LLMs, including privacy concerns, computational and financial costs, and a lack of transparency about training data, limit their clinical utility. We investigate whether smaller neural language models can serve as effective alternatives for detecting positive formal thought disorder, using the same sliding-window perplexity measurements that proved effective with larger models. Surprisingly, our results show that smaller models are more sensitive to the linguistic differences associated with formal thought disorder than their larger counterparts. Detection capability declines beyond a certain model size and context length, challenging the common assumption that ``bigger is better'' for LLM-based applications. Our findings generalize across audio diaries and clinical interview speech samples from individuals with psychotic symptoms, suggesting a promising direction for efficient, cost-effective, and privacy-preserving screening tools that can be deployed in both clinical and naturalistic settings.
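The sliding-window perplexity measurement referenced above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, stride, and language model are placeholders (here a toy unigram model estimated from the transcript itself), whereas the study uses neural language models of varying sizes.

```python
import math
from collections import Counter

def window_perplexities(tokens, logprob_fn, window=4, stride=1):
    """Perplexity of each sliding window over a token sequence.

    logprob_fn(token, context) -> log-probability of `token` given `context`.
    Perplexity of a window is exp(-mean log-probability of its tokens);
    harder-to-predict (more disorganized) speech yields higher values.
    """
    out = []
    for start in range(0, max(1, len(tokens) - window + 1), stride):
        chunk = tokens[start:start + window]
        lp = sum(logprob_fn(tok, chunk[:i]) for i, tok in enumerate(chunk))
        out.append(math.exp(-lp / len(chunk)))
    return out

def make_unigram(tokens):
    """Toy stand-in for a language model: context-free unigram estimates."""
    counts = Counter(tokens)
    total = len(tokens)
    return lambda tok, ctx: math.log(counts[tok] / total)

transcript = "the cat sat on the mat the cat ran".split()
ppl = window_perplexities(transcript, make_unigram(transcript), window=4)
print([round(p, 2) for p in ppl])  # one perplexity score per window
```

In practice `logprob_fn` would query a neural language model's conditional token probabilities, and per-window perplexities would be aggregated (e.g., averaged) into a single predictability score per speech sample.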